00:00:00.001 Started by upstream project "autotest-nightly" build number 3787 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3167 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.093 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.094 The recommended git tool is: git 00:00:00.094 using credential 00000000-0000-0000-0000-000000000002 00:00:00.096 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.127 Fetching changes from the remote Git repository 00:00:00.131 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.164 Using shallow fetch with depth 1 00:00:00.164 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.164 > git --version # timeout=10 00:00:00.202 > git --version # 'git version 2.39.2' 00:00:00.202 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.234 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.234 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.900 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.909 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.918 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:05.918 > git config core.sparsecheckout # timeout=10 00:00:05.928 > git read-tree -mu HEAD # timeout=10 00:00:05.941 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:05.958 
Commit message: "pool: fixes for VisualBuild class" 00:00:05.959 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:06.066 [Pipeline] Start of Pipeline 00:00:06.076 [Pipeline] library 00:00:06.077 Loading library shm_lib@master 00:00:06.077 Library shm_lib@master is cached. Copying from home. 00:00:06.090 [Pipeline] node 00:00:06.100 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.102 [Pipeline] { 00:00:06.113 [Pipeline] catchError 00:00:06.114 [Pipeline] { 00:00:06.125 [Pipeline] wrap 00:00:06.133 [Pipeline] { 00:00:06.137 [Pipeline] stage 00:00:06.139 [Pipeline] { (Prologue) 00:00:06.304 [Pipeline] sh 00:00:06.590 + logger -p user.info -t JENKINS-CI 00:00:06.609 [Pipeline] echo 00:00:06.611 Node: CYP9 00:00:06.616 [Pipeline] sh 00:00:06.914 [Pipeline] setCustomBuildProperty 00:00:06.924 [Pipeline] echo 00:00:06.925 Cleanup processes 00:00:06.931 [Pipeline] sh 00:00:07.217 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.217 2231068 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.233 [Pipeline] sh 00:00:07.519 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.519 ++ grep -v 'sudo pgrep' 00:00:07.519 ++ awk '{print $1}' 00:00:07.519 + sudo kill -9 00:00:07.519 + true 00:00:07.534 [Pipeline] cleanWs 00:00:07.544 [WS-CLEANUP] Deleting project workspace... 00:00:07.544 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.552 [WS-CLEANUP] done 00:00:07.556 [Pipeline] setCustomBuildProperty 00:00:07.569 [Pipeline] sh 00:00:07.853 + sudo git config --global --replace-all safe.directory '*' 00:00:07.928 [Pipeline] nodesByLabel 00:00:07.930 Found a total of 2 nodes with the 'sorcerer' label 00:00:07.940 [Pipeline] httpRequest 00:00:07.945 HttpMethod: GET 00:00:07.946 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:07.952 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:07.967 Response Code: HTTP/1.1 200 OK 00:00:07.967 Success: Status code 200 is in the accepted range: 200,404 00:00:07.967 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:11.129 [Pipeline] sh 00:00:11.413 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:11.428 [Pipeline] httpRequest 00:00:11.433 HttpMethod: GET 00:00:11.435 URL: http://10.211.164.101/packages/spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz 00:00:11.435 Sending request to url: http://10.211.164.101/packages/spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz 00:00:11.452 Response Code: HTTP/1.1 200 OK 00:00:11.453 Success: Status code 200 is in the accepted range: 200,404 00:00:11.453 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz 00:00:44.221 [Pipeline] sh 00:00:44.509 + tar --no-same-owner -xf spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz 00:00:47.065 [Pipeline] sh 00:00:47.349 + git -C spdk log --oneline -n5 00:00:47.349 e55c9a812 vbdev_error: decrement error_num atomically 00:00:47.349 f16e9f4d2 lib/event: framework_get_reactors supports getting pid and tid 00:00:47.349 2d610abe8 lib/env_dpdk: add spdk_get_tid function 00:00:47.349 f470a0dc6 event: do not call reactor events from spdk_thread context 00:00:47.349 8d3fdcaba nvmf: cleanup 
maximum number of subsystem namespace remanent code 00:00:47.362 [Pipeline] } 00:00:47.376 [Pipeline] // stage 00:00:47.384 [Pipeline] stage 00:00:47.386 [Pipeline] { (Prepare) 00:00:47.399 [Pipeline] writeFile 00:00:47.415 [Pipeline] sh 00:00:47.699 + logger -p user.info -t JENKINS-CI 00:00:47.712 [Pipeline] sh 00:00:47.997 + logger -p user.info -t JENKINS-CI 00:00:48.010 [Pipeline] sh 00:00:48.294 + cat autorun-spdk.conf 00:00:48.294 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.294 SPDK_TEST_NVMF=1 00:00:48.294 SPDK_TEST_NVME_CLI=1 00:00:48.294 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.294 SPDK_TEST_NVMF_NICS=e810 00:00:48.294 SPDK_RUN_UBSAN=1 00:00:48.294 NET_TYPE=phy 00:00:48.302 RUN_NIGHTLY=1 00:00:48.306 [Pipeline] readFile 00:00:48.328 [Pipeline] withEnv 00:00:48.330 [Pipeline] { 00:00:48.343 [Pipeline] sh 00:00:48.631 + set -ex 00:00:48.631 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:48.631 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:48.631 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.631 ++ SPDK_TEST_NVMF=1 00:00:48.631 ++ SPDK_TEST_NVME_CLI=1 00:00:48.631 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.631 ++ SPDK_TEST_NVMF_NICS=e810 00:00:48.631 ++ SPDK_RUN_UBSAN=1 00:00:48.631 ++ NET_TYPE=phy 00:00:48.631 ++ RUN_NIGHTLY=1 00:00:48.631 + case $SPDK_TEST_NVMF_NICS in 00:00:48.631 + DRIVERS=ice 00:00:48.631 + [[ tcp == \r\d\m\a ]] 00:00:48.631 + [[ -n ice ]] 00:00:48.631 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:48.631 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:48.631 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:48.631 rmmod: ERROR: Module irdma is not currently loaded 00:00:48.631 rmmod: ERROR: Module i40iw is not currently loaded 00:00:48.631 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:48.631 + true 00:00:48.631 + for D in $DRIVERS 00:00:48.631 + sudo modprobe ice 00:00:48.631 + exit 0 00:00:48.641 [Pipeline] } 00:00:48.659 [Pipeline] // withEnv 
00:00:48.664 [Pipeline] } 00:00:48.680 [Pipeline] // stage 00:00:48.690 [Pipeline] catchError 00:00:48.691 [Pipeline] { 00:00:48.706 [Pipeline] timeout 00:00:48.706 Timeout set to expire in 50 min 00:00:48.707 [Pipeline] { 00:00:48.717 [Pipeline] stage 00:00:48.718 [Pipeline] { (Tests) 00:00:48.729 [Pipeline] sh 00:00:49.013 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:49.013 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:49.013 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:49.013 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:49.013 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:49.013 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:49.013 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:49.013 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:49.013 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:49.013 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:49.013 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:49.013 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:49.013 + source /etc/os-release 00:00:49.013 ++ NAME='Fedora Linux' 00:00:49.013 ++ VERSION='38 (Cloud Edition)' 00:00:49.013 ++ ID=fedora 00:00:49.013 ++ VERSION_ID=38 00:00:49.013 ++ VERSION_CODENAME= 00:00:49.013 ++ PLATFORM_ID=platform:f38 00:00:49.013 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:49.013 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:49.013 ++ LOGO=fedora-logo-icon 00:00:49.013 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:49.013 ++ HOME_URL=https://fedoraproject.org/ 00:00:49.013 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:49.013 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:49.013 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:49.013 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 
00:00:49.013 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:49.013 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:49.013 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:49.013 ++ SUPPORT_END=2024-05-14 00:00:49.013 ++ VARIANT='Cloud Edition' 00:00:49.013 ++ VARIANT_ID=cloud 00:00:49.013 + uname -a 00:00:49.013 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:49.013 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:52.314 Hugepages 00:00:52.314 node hugesize free / total 00:00:52.314 node0 1048576kB 0 / 0 00:00:52.314 node0 2048kB 0 / 0 00:00:52.314 node1 1048576kB 0 / 0 00:00:52.314 node1 2048kB 0 / 0 00:00:52.314 00:00:52.314 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:52.314 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:52.314 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:00:52.314 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:52.314 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:52.314 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:52.314 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:52.314 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:52.314 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:52.314 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:52.314 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:52.314 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:52.314 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:52.314 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:52.314 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:52.314 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:52.314 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:52.314 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:52.314 + rm -f /tmp/spdk-ld-path 00:00:52.314 + source autorun-spdk.conf 00:00:52.314 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.314 ++ SPDK_TEST_NVMF=1 00:00:52.314 ++ SPDK_TEST_NVME_CLI=1 00:00:52.314 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:52.314 ++ 
SPDK_TEST_NVMF_NICS=e810 00:00:52.314 ++ SPDK_RUN_UBSAN=1 00:00:52.314 ++ NET_TYPE=phy 00:00:52.314 ++ RUN_NIGHTLY=1 00:00:52.314 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:52.314 + [[ -n '' ]] 00:00:52.314 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:52.314 + for M in /var/spdk/build-*-manifest.txt 00:00:52.314 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:52.314 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:52.314 + for M in /var/spdk/build-*-manifest.txt 00:00:52.314 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:52.314 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:52.314 ++ uname 00:00:52.314 + [[ Linux == \L\i\n\u\x ]] 00:00:52.314 + sudo dmesg -T 00:00:52.314 + sudo dmesg --clear 00:00:52.314 + dmesg_pid=2232612 00:00:52.314 + [[ Fedora Linux == FreeBSD ]] 00:00:52.314 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:52.314 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:52.314 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:52.314 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:52.314 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:52.314 + [[ -x /usr/src/fio-static/fio ]] 00:00:52.314 + export FIO_BIN=/usr/src/fio-static/fio 00:00:52.314 + FIO_BIN=/usr/src/fio-static/fio 00:00:52.314 + sudo dmesg -Tw 00:00:52.314 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:52.314 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:52.314 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:52.314 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:52.314 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:52.314 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:52.314 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:52.314 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:52.314 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:52.314 Test configuration: 00:00:52.314 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.314 SPDK_TEST_NVMF=1 00:00:52.314 SPDK_TEST_NVME_CLI=1 00:00:52.314 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:52.314 SPDK_TEST_NVMF_NICS=e810 00:00:52.314 SPDK_RUN_UBSAN=1 00:00:52.314 NET_TYPE=phy 00:00:52.314 RUN_NIGHTLY=1 08:39:14 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:52.314 08:39:14 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:52.314 08:39:14 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:52.314 08:39:14 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:52.314 08:39:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:52.314 08:39:14 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:52.314 08:39:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:52.314 08:39:14 -- paths/export.sh@5 -- $ export PATH 00:00:52.314 08:39:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:52.315 08:39:14 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:52.315 08:39:14 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:52.315 08:39:14 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1717915154.XXXXXX 00:00:52.315 08:39:14 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1717915154.rJqrJp 00:00:52.315 08:39:14 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:52.315 08:39:14 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:52.315 08:39:14 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 
00:00:52.315 08:39:14 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:52.315 08:39:14 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:52.315 08:39:14 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:52.315 08:39:14 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:52.315 08:39:14 -- common/autotest_common.sh@10 -- $ set +x 00:00:52.315 08:39:14 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:00:52.315 08:39:14 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:52.315 08:39:14 -- pm/common@17 -- $ local monitor 00:00:52.315 08:39:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:52.315 08:39:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:52.315 08:39:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:52.315 08:39:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:52.315 08:39:14 -- pm/common@21 -- $ date +%s 00:00:52.315 08:39:14 -- pm/common@25 -- $ sleep 1 00:00:52.315 08:39:14 -- pm/common@21 -- $ date +%s 00:00:52.315 08:39:14 -- pm/common@21 -- $ date +%s 00:00:52.315 08:39:14 -- pm/common@21 -- $ date +%s 00:00:52.315 08:39:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717915154 00:00:52.315 08:39:14 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717915154 00:00:52.315 08:39:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717915154 00:00:52.315 08:39:14 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717915154 00:00:52.315 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717915154_collect-vmstat.pm.log 00:00:52.315 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717915154_collect-cpu-temp.pm.log 00:00:52.315 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717915154_collect-cpu-load.pm.log 00:00:52.315 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717915154_collect-bmc-pm.bmc.pm.log 00:00:53.257 08:39:15 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:53.258 08:39:15 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:53.258 08:39:15 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:53.258 08:39:15 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:53.258 08:39:15 -- spdk/autobuild.sh@16 -- $ date -u 00:00:53.258 Sun Jun 9 06:39:15 AM UTC 2024 00:00:53.258 08:39:15 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:53.258 v24.09-pre-53-ge55c9a812 00:00:53.258 08:39:15 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:53.258 08:39:15 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:53.258 08:39:15 -- spdk/autobuild.sh@24 -- $ run_test 
ubsan echo 'using ubsan' 00:00:53.258 08:39:15 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:00:53.258 08:39:15 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:00:53.258 08:39:15 -- common/autotest_common.sh@10 -- $ set +x 00:00:53.258 ************************************ 00:00:53.258 START TEST ubsan 00:00:53.258 ************************************ 00:00:53.258 08:39:15 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan' 00:00:53.258 using ubsan 00:00:53.258 00:00:53.258 real 0m0.000s 00:00:53.258 user 0m0.000s 00:00:53.258 sys 0m0.000s 00:00:53.258 08:39:15 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:00:53.258 08:39:15 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:53.258 ************************************ 00:00:53.258 END TEST ubsan 00:00:53.258 ************************************ 00:00:53.520 08:39:15 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:53.520 08:39:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:53.520 08:39:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:53.520 08:39:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:53.520 08:39:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:53.520 08:39:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:53.520 08:39:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:53.520 08:39:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:53.520 08:39:15 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:00:53.520 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:53.520 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:53.781 Using 'verbs' RDMA provider 00:01:09.645 Configuring ISA-L (logfile: 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:21.886 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:21.886 Creating mk/config.mk...done. 00:01:21.886 Creating mk/cc.flags.mk...done. 00:01:21.886 Type 'make' to build. 00:01:21.886 08:39:43 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:21.886 08:39:43 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:21.886 08:39:43 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:21.886 08:39:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.886 ************************************ 00:01:21.886 START TEST make 00:01:21.886 ************************************ 00:01:21.886 08:39:43 make -- common/autotest_common.sh@1124 -- $ make -j144 00:01:21.886 make[1]: Nothing to be done for 'all'. 00:01:30.028 The Meson build system 00:01:30.028 Version: 1.3.1 00:01:30.028 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:30.028 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:30.028 Build type: native build 00:01:30.028 Program cat found: YES (/usr/bin/cat) 00:01:30.028 Project name: DPDK 00:01:30.028 Project version: 24.03.0 00:01:30.028 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:30.028 C linker for the host machine: cc ld.bfd 2.39-16 00:01:30.028 Host machine cpu family: x86_64 00:01:30.028 Host machine cpu: x86_64 00:01:30.028 Message: ## Building in Developer Mode ## 00:01:30.028 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:30.028 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:30.028 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:30.028 Program python3 found: YES (/usr/bin/python3) 
00:01:30.028 Program cat found: YES (/usr/bin/cat) 00:01:30.028 Compiler for C supports arguments -march=native: YES 00:01:30.028 Checking for size of "void *" : 8 00:01:30.028 Checking for size of "void *" : 8 (cached) 00:01:30.028 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:30.028 Library m found: YES 00:01:30.028 Library numa found: YES 00:01:30.028 Has header "numaif.h" : YES 00:01:30.028 Library fdt found: NO 00:01:30.028 Library execinfo found: NO 00:01:30.028 Has header "execinfo.h" : YES 00:01:30.028 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:30.028 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:30.028 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:30.028 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:30.028 Run-time dependency openssl found: YES 3.0.9 00:01:30.028 Run-time dependency libpcap found: YES 1.10.4 00:01:30.028 Has header "pcap.h" with dependency libpcap: YES 00:01:30.028 Compiler for C supports arguments -Wcast-qual: YES 00:01:30.028 Compiler for C supports arguments -Wdeprecated: YES 00:01:30.028 Compiler for C supports arguments -Wformat: YES 00:01:30.028 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:30.028 Compiler for C supports arguments -Wformat-security: NO 00:01:30.028 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:30.028 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:30.028 Compiler for C supports arguments -Wnested-externs: YES 00:01:30.028 Compiler for C supports arguments -Wold-style-definition: YES 00:01:30.028 Compiler for C supports arguments -Wpointer-arith: YES 00:01:30.028 Compiler for C supports arguments -Wsign-compare: YES 00:01:30.028 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:30.028 Compiler for C supports arguments -Wundef: YES 00:01:30.028 Compiler for C supports arguments -Wwrite-strings: YES 00:01:30.028 Compiler for C supports arguments 
-Wno-address-of-packed-member: YES 00:01:30.029 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:30.029 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:30.029 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:30.029 Program objdump found: YES (/usr/bin/objdump) 00:01:30.029 Compiler for C supports arguments -mavx512f: YES 00:01:30.029 Checking if "AVX512 checking" compiles: YES 00:01:30.029 Fetching value of define "__SSE4_2__" : 1 00:01:30.029 Fetching value of define "__AES__" : 1 00:01:30.029 Fetching value of define "__AVX__" : 1 00:01:30.029 Fetching value of define "__AVX2__" : 1 00:01:30.029 Fetching value of define "__AVX512BW__" : 1 00:01:30.029 Fetching value of define "__AVX512CD__" : 1 00:01:30.029 Fetching value of define "__AVX512DQ__" : 1 00:01:30.029 Fetching value of define "__AVX512F__" : 1 00:01:30.029 Fetching value of define "__AVX512VL__" : 1 00:01:30.029 Fetching value of define "__PCLMUL__" : 1 00:01:30.029 Fetching value of define "__RDRND__" : 1 00:01:30.029 Fetching value of define "__RDSEED__" : 1 00:01:30.029 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:30.029 Fetching value of define "__znver1__" : (undefined) 00:01:30.029 Fetching value of define "__znver2__" : (undefined) 00:01:30.029 Fetching value of define "__znver3__" : (undefined) 00:01:30.029 Fetching value of define "__znver4__" : (undefined) 00:01:30.029 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:30.029 Message: lib/log: Defining dependency "log" 00:01:30.029 Message: lib/kvargs: Defining dependency "kvargs" 00:01:30.029 Message: lib/telemetry: Defining dependency "telemetry" 00:01:30.029 Checking for function "getentropy" : NO 00:01:30.029 Message: lib/eal: Defining dependency "eal" 00:01:30.029 Message: lib/ring: Defining dependency "ring" 00:01:30.029 Message: lib/rcu: Defining dependency "rcu" 00:01:30.029 Message: lib/mempool: Defining dependency "mempool" 00:01:30.029 
Message: lib/mbuf: Defining dependency "mbuf" 00:01:30.029 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:30.029 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:30.029 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:30.029 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:30.029 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:30.029 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:30.029 Compiler for C supports arguments -mpclmul: YES 00:01:30.029 Compiler for C supports arguments -maes: YES 00:01:30.029 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:30.029 Compiler for C supports arguments -mavx512bw: YES 00:01:30.029 Compiler for C supports arguments -mavx512dq: YES 00:01:30.029 Compiler for C supports arguments -mavx512vl: YES 00:01:30.029 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:30.029 Compiler for C supports arguments -mavx2: YES 00:01:30.029 Compiler for C supports arguments -mavx: YES 00:01:30.029 Message: lib/net: Defining dependency "net" 00:01:30.029 Message: lib/meter: Defining dependency "meter" 00:01:30.029 Message: lib/ethdev: Defining dependency "ethdev" 00:01:30.029 Message: lib/pci: Defining dependency "pci" 00:01:30.029 Message: lib/cmdline: Defining dependency "cmdline" 00:01:30.029 Message: lib/hash: Defining dependency "hash" 00:01:30.029 Message: lib/timer: Defining dependency "timer" 00:01:30.029 Message: lib/compressdev: Defining dependency "compressdev" 00:01:30.029 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:30.029 Message: lib/dmadev: Defining dependency "dmadev" 00:01:30.029 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:30.029 Message: lib/power: Defining dependency "power" 00:01:30.029 Message: lib/reorder: Defining dependency "reorder" 00:01:30.029 Message: lib/security: Defining dependency "security" 00:01:30.029 Has header "linux/userfaultfd.h" : YES 00:01:30.029 Has header "linux/vduse.h" : YES 
00:01:30.029 Message: lib/vhost: Defining dependency "vhost" 00:01:30.029 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:30.029 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:30.029 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:30.029 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:30.029 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:30.029 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:30.029 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:30.029 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:30.029 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:30.029 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:30.029 Program doxygen found: YES (/usr/bin/doxygen) 00:01:30.029 Configuring doxy-api-html.conf using configuration 00:01:30.029 Configuring doxy-api-man.conf using configuration 00:01:30.029 Program mandb found: YES (/usr/bin/mandb) 00:01:30.029 Program sphinx-build found: NO 00:01:30.029 Configuring rte_build_config.h using configuration 00:01:30.029 Message: 00:01:30.029 ================= 00:01:30.029 Applications Enabled 00:01:30.029 ================= 00:01:30.029 00:01:30.029 apps: 00:01:30.029 00:01:30.029 00:01:30.029 Message: 00:01:30.029 ================= 00:01:30.029 Libraries Enabled 00:01:30.029 ================= 00:01:30.029 00:01:30.029 libs: 00:01:30.029 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:30.029 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:30.029 cryptodev, dmadev, power, reorder, security, vhost, 00:01:30.029 00:01:30.029 Message: 00:01:30.029 =============== 00:01:30.029 Drivers Enabled 00:01:30.029 =============== 00:01:30.029 00:01:30.029 common: 00:01:30.029 00:01:30.029 bus: 00:01:30.029 pci, vdev, 00:01:30.029 mempool: 
00:01:30.029 ring, 00:01:30.029 dma: 00:01:30.029 00:01:30.029 net: 00:01:30.029 00:01:30.029 crypto: 00:01:30.029 00:01:30.029 compress: 00:01:30.029 00:01:30.029 vdpa: 00:01:30.029 00:01:30.029 00:01:30.029 Message: 00:01:30.029 ================= 00:01:30.029 Content Skipped 00:01:30.029 ================= 00:01:30.029 00:01:30.029 apps: 00:01:30.029 dumpcap: explicitly disabled via build config 00:01:30.029 graph: explicitly disabled via build config 00:01:30.029 pdump: explicitly disabled via build config 00:01:30.029 proc-info: explicitly disabled via build config 00:01:30.029 test-acl: explicitly disabled via build config 00:01:30.029 test-bbdev: explicitly disabled via build config 00:01:30.029 test-cmdline: explicitly disabled via build config 00:01:30.029 test-compress-perf: explicitly disabled via build config 00:01:30.029 test-crypto-perf: explicitly disabled via build config 00:01:30.029 test-dma-perf: explicitly disabled via build config 00:01:30.029 test-eventdev: explicitly disabled via build config 00:01:30.029 test-fib: explicitly disabled via build config 00:01:30.029 test-flow-perf: explicitly disabled via build config 00:01:30.029 test-gpudev: explicitly disabled via build config 00:01:30.029 test-mldev: explicitly disabled via build config 00:01:30.029 test-pipeline: explicitly disabled via build config 00:01:30.029 test-pmd: explicitly disabled via build config 00:01:30.029 test-regex: explicitly disabled via build config 00:01:30.029 test-sad: explicitly disabled via build config 00:01:30.029 test-security-perf: explicitly disabled via build config 00:01:30.029 00:01:30.029 libs: 00:01:30.029 argparse: explicitly disabled via build config 00:01:30.029 metrics: explicitly disabled via build config 00:01:30.029 acl: explicitly disabled via build config 00:01:30.029 bbdev: explicitly disabled via build config 00:01:30.029 bitratestats: explicitly disabled via build config 00:01:30.029 bpf: explicitly disabled via build config 00:01:30.029 
cfgfile: explicitly disabled via build config 00:01:30.029 distributor: explicitly disabled via build config 00:01:30.029 efd: explicitly disabled via build config 00:01:30.029 eventdev: explicitly disabled via build config 00:01:30.029 dispatcher: explicitly disabled via build config 00:01:30.029 gpudev: explicitly disabled via build config 00:01:30.029 gro: explicitly disabled via build config 00:01:30.029 gso: explicitly disabled via build config 00:01:30.029 ip_frag: explicitly disabled via build config 00:01:30.029 jobstats: explicitly disabled via build config 00:01:30.029 latencystats: explicitly disabled via build config 00:01:30.029 lpm: explicitly disabled via build config 00:01:30.029 member: explicitly disabled via build config 00:01:30.029 pcapng: explicitly disabled via build config 00:01:30.029 rawdev: explicitly disabled via build config 00:01:30.029 regexdev: explicitly disabled via build config 00:01:30.029 mldev: explicitly disabled via build config 00:01:30.029 rib: explicitly disabled via build config 00:01:30.029 sched: explicitly disabled via build config 00:01:30.029 stack: explicitly disabled via build config 00:01:30.029 ipsec: explicitly disabled via build config 00:01:30.029 pdcp: explicitly disabled via build config 00:01:30.029 fib: explicitly disabled via build config 00:01:30.029 port: explicitly disabled via build config 00:01:30.029 pdump: explicitly disabled via build config 00:01:30.029 table: explicitly disabled via build config 00:01:30.029 pipeline: explicitly disabled via build config 00:01:30.029 graph: explicitly disabled via build config 00:01:30.029 node: explicitly disabled via build config 00:01:30.029 00:01:30.029 drivers: 00:01:30.029 common/cpt: not in enabled drivers build config 00:01:30.029 common/dpaax: not in enabled drivers build config 00:01:30.029 common/iavf: not in enabled drivers build config 00:01:30.029 common/idpf: not in enabled drivers build config 00:01:30.029 common/ionic: not in enabled drivers 
build config 00:01:30.029 common/mvep: not in enabled drivers build config 00:01:30.029 common/octeontx: not in enabled drivers build config 00:01:30.029 bus/auxiliary: not in enabled drivers build config 00:01:30.029 bus/cdx: not in enabled drivers build config 00:01:30.029 bus/dpaa: not in enabled drivers build config 00:01:30.029 bus/fslmc: not in enabled drivers build config 00:01:30.029 bus/ifpga: not in enabled drivers build config 00:01:30.029 bus/platform: not in enabled drivers build config 00:01:30.029 bus/uacce: not in enabled drivers build config 00:01:30.029 bus/vmbus: not in enabled drivers build config 00:01:30.029 common/cnxk: not in enabled drivers build config 00:01:30.029 common/mlx5: not in enabled drivers build config 00:01:30.029 common/nfp: not in enabled drivers build config 00:01:30.030 common/nitrox: not in enabled drivers build config 00:01:30.030 common/qat: not in enabled drivers build config 00:01:30.030 common/sfc_efx: not in enabled drivers build config 00:01:30.030 mempool/bucket: not in enabled drivers build config 00:01:30.030 mempool/cnxk: not in enabled drivers build config 00:01:30.030 mempool/dpaa: not in enabled drivers build config 00:01:30.030 mempool/dpaa2: not in enabled drivers build config 00:01:30.030 mempool/octeontx: not in enabled drivers build config 00:01:30.030 mempool/stack: not in enabled drivers build config 00:01:30.030 dma/cnxk: not in enabled drivers build config 00:01:30.030 dma/dpaa: not in enabled drivers build config 00:01:30.030 dma/dpaa2: not in enabled drivers build config 00:01:30.030 dma/hisilicon: not in enabled drivers build config 00:01:30.030 dma/idxd: not in enabled drivers build config 00:01:30.030 dma/ioat: not in enabled drivers build config 00:01:30.030 dma/skeleton: not in enabled drivers build config 00:01:30.030 net/af_packet: not in enabled drivers build config 00:01:30.030 net/af_xdp: not in enabled drivers build config 00:01:30.030 net/ark: not in enabled drivers build config 
00:01:30.030 net/atlantic: not in enabled drivers build config 00:01:30.030 net/avp: not in enabled drivers build config 00:01:30.030 net/axgbe: not in enabled drivers build config 00:01:30.030 net/bnx2x: not in enabled drivers build config 00:01:30.030 net/bnxt: not in enabled drivers build config 00:01:30.030 net/bonding: not in enabled drivers build config 00:01:30.030 net/cnxk: not in enabled drivers build config 00:01:30.030 net/cpfl: not in enabled drivers build config 00:01:30.030 net/cxgbe: not in enabled drivers build config 00:01:30.030 net/dpaa: not in enabled drivers build config 00:01:30.030 net/dpaa2: not in enabled drivers build config 00:01:30.030 net/e1000: not in enabled drivers build config 00:01:30.030 net/ena: not in enabled drivers build config 00:01:30.030 net/enetc: not in enabled drivers build config 00:01:30.030 net/enetfec: not in enabled drivers build config 00:01:30.030 net/enic: not in enabled drivers build config 00:01:30.030 net/failsafe: not in enabled drivers build config 00:01:30.030 net/fm10k: not in enabled drivers build config 00:01:30.030 net/gve: not in enabled drivers build config 00:01:30.030 net/hinic: not in enabled drivers build config 00:01:30.030 net/hns3: not in enabled drivers build config 00:01:30.030 net/i40e: not in enabled drivers build config 00:01:30.030 net/iavf: not in enabled drivers build config 00:01:30.030 net/ice: not in enabled drivers build config 00:01:30.030 net/idpf: not in enabled drivers build config 00:01:30.030 net/igc: not in enabled drivers build config 00:01:30.030 net/ionic: not in enabled drivers build config 00:01:30.030 net/ipn3ke: not in enabled drivers build config 00:01:30.030 net/ixgbe: not in enabled drivers build config 00:01:30.030 net/mana: not in enabled drivers build config 00:01:30.030 net/memif: not in enabled drivers build config 00:01:30.030 net/mlx4: not in enabled drivers build config 00:01:30.030 net/mlx5: not in enabled drivers build config 00:01:30.030 net/mvneta: not 
in enabled drivers build config 00:01:30.030 net/mvpp2: not in enabled drivers build config 00:01:30.030 net/netvsc: not in enabled drivers build config 00:01:30.030 net/nfb: not in enabled drivers build config 00:01:30.030 net/nfp: not in enabled drivers build config 00:01:30.030 net/ngbe: not in enabled drivers build config 00:01:30.030 net/null: not in enabled drivers build config 00:01:30.030 net/octeontx: not in enabled drivers build config 00:01:30.030 net/octeon_ep: not in enabled drivers build config 00:01:30.030 net/pcap: not in enabled drivers build config 00:01:30.030 net/pfe: not in enabled drivers build config 00:01:30.030 net/qede: not in enabled drivers build config 00:01:30.030 net/ring: not in enabled drivers build config 00:01:30.030 net/sfc: not in enabled drivers build config 00:01:30.030 net/softnic: not in enabled drivers build config 00:01:30.030 net/tap: not in enabled drivers build config 00:01:30.030 net/thunderx: not in enabled drivers build config 00:01:30.030 net/txgbe: not in enabled drivers build config 00:01:30.030 net/vdev_netvsc: not in enabled drivers build config 00:01:30.030 net/vhost: not in enabled drivers build config 00:01:30.030 net/virtio: not in enabled drivers build config 00:01:30.030 net/vmxnet3: not in enabled drivers build config 00:01:30.030 raw/*: missing internal dependency, "rawdev" 00:01:30.030 crypto/armv8: not in enabled drivers build config 00:01:30.030 crypto/bcmfs: not in enabled drivers build config 00:01:30.030 crypto/caam_jr: not in enabled drivers build config 00:01:30.030 crypto/ccp: not in enabled drivers build config 00:01:30.030 crypto/cnxk: not in enabled drivers build config 00:01:30.030 crypto/dpaa_sec: not in enabled drivers build config 00:01:30.030 crypto/dpaa2_sec: not in enabled drivers build config 00:01:30.030 crypto/ipsec_mb: not in enabled drivers build config 00:01:30.030 crypto/mlx5: not in enabled drivers build config 00:01:30.030 crypto/mvsam: not in enabled drivers build config 
00:01:30.030 crypto/nitrox: not in enabled drivers build config 00:01:30.030 crypto/null: not in enabled drivers build config 00:01:30.030 crypto/octeontx: not in enabled drivers build config 00:01:30.030 crypto/openssl: not in enabled drivers build config 00:01:30.030 crypto/scheduler: not in enabled drivers build config 00:01:30.030 crypto/uadk: not in enabled drivers build config 00:01:30.030 crypto/virtio: not in enabled drivers build config 00:01:30.030 compress/isal: not in enabled drivers build config 00:01:30.030 compress/mlx5: not in enabled drivers build config 00:01:30.030 compress/nitrox: not in enabled drivers build config 00:01:30.030 compress/octeontx: not in enabled drivers build config 00:01:30.030 compress/zlib: not in enabled drivers build config 00:01:30.030 regex/*: missing internal dependency, "regexdev" 00:01:30.030 ml/*: missing internal dependency, "mldev" 00:01:30.030 vdpa/ifc: not in enabled drivers build config 00:01:30.030 vdpa/mlx5: not in enabled drivers build config 00:01:30.030 vdpa/nfp: not in enabled drivers build config 00:01:30.030 vdpa/sfc: not in enabled drivers build config 00:01:30.030 event/*: missing internal dependency, "eventdev" 00:01:30.030 baseband/*: missing internal dependency, "bbdev" 00:01:30.030 gpu/*: missing internal dependency, "gpudev" 00:01:30.030 00:01:30.030 00:01:30.291 Build targets in project: 84 00:01:30.291 00:01:30.291 DPDK 24.03.0 00:01:30.291 00:01:30.291 User defined options 00:01:30.291 buildtype : debug 00:01:30.291 default_library : shared 00:01:30.291 libdir : lib 00:01:30.291 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:30.291 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:30.291 c_link_args : 00:01:30.291 cpu_instruction_set: native 00:01:30.291 disable_apps : 
test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:30.291 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:30.291 enable_docs : false 00:01:30.291 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:30.291 enable_kmods : false 00:01:30.291 tests : false 00:01:30.291 00:01:30.291 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:30.561 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:30.829 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:30.829 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:30.829 [3/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:30.829 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:30.829 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:30.829 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:30.829 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:30.829 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:30.829 [9/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:30.829 [10/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:30.829 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:30.829 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:30.829 [13/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:30.829 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:30.829 [15/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:30.829 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:30.829 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:30.829 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:30.829 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:30.829 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:30.829 [21/267] Linking static target lib/librte_kvargs.a 00:01:30.829 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:30.829 [23/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:30.829 [24/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:30.829 [25/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:30.829 [26/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:30.829 [27/267] Linking static target lib/librte_pci.a 00:01:30.829 [28/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:30.829 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:30.829 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:30.829 [31/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:30.829 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:31.087 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:31.087 [34/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:31.087 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:31.087 [36/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:31.087 [37/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:31.087 [38/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:31.087 [39/267] Linking static target lib/librte_log.a 00:01:31.087 [40/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:31.087 [41/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:31.087 [42/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:31.087 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:31.087 [44/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:31.087 [45/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:31.087 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:31.087 [47/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:31.087 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:31.087 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:31.087 [50/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:31.087 [51/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:31.087 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:31.087 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:31.087 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:31.087 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:31.087 [56/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:31.088 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:31.088 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:31.088 [59/267] 
Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:31.088 [60/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:31.088 [61/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:31.088 [62/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.088 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:31.088 [64/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.088 [65/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:31.088 [66/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:31.088 [67/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:31.088 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:31.088 [69/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:31.088 [70/267] Linking static target lib/librte_ring.a 00:01:31.088 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:31.088 [72/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:31.088 [73/267] Linking static target lib/librte_meter.a 00:01:31.088 [74/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:31.088 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:31.088 [76/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:31.088 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:31.088 [78/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:31.088 [79/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:31.088 [80/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:31.088 [81/267] Linking static target lib/librte_rcu.a 00:01:31.088 [82/267] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:31.088 [83/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:31.088 [84/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:31.088 [85/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:31.346 [86/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:31.346 [87/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:31.346 [88/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:31.346 [89/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:31.346 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:31.346 [91/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:31.346 [92/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:31.347 [93/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:31.347 [94/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:31.347 [95/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:31.347 [96/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:31.347 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:31.347 [98/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:31.347 [99/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:31.347 [100/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:31.347 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:31.347 [102/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:31.347 [103/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:31.347 [104/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:31.347 [105/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:31.347 [106/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:31.347 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:31.347 [108/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:31.347 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:31.347 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:31.347 [111/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:31.347 [112/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:31.347 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:31.347 [114/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:31.347 [115/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:31.347 [116/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:31.347 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:31.347 [118/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:31.347 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:31.347 [120/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:31.347 [121/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:31.347 [122/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:31.347 [123/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:31.347 [124/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.347 [125/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:31.347 [126/267] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:31.347 [127/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:31.347 [128/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:31.347 [129/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:31.347 [130/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:31.347 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:31.347 [132/267] Linking static target lib/librte_telemetry.a 00:01:31.347 [133/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:31.347 [134/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:31.347 [135/267] Linking static target lib/librte_cmdline.a 00:01:31.347 [136/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:31.347 [137/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:31.347 [138/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:31.347 [139/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:31.347 [140/267] Linking static target drivers/librte_bus_vdev.a 00:01:31.608 [141/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:31.608 [142/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.608 [143/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:31.608 [144/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:31.608 [145/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:31.608 [146/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:31.608 [147/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:31.608 
[148/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:31.608 [149/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:31.608 [150/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:31.608 [151/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:31.608 [152/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:31.608 [153/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:31.608 [154/267] Linking static target lib/librte_dmadev.a 00:01:31.608 [155/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:31.608 [156/267] Linking static target lib/librte_timer.a 00:01:31.608 [157/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:31.608 [158/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:31.608 [159/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:31.608 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:31.608 [161/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:31.608 [162/267] Linking static target lib/librte_mempool.a 00:01:31.608 [163/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:31.608 [164/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:31.608 [165/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:31.608 [166/267] Linking static target lib/librte_net.a 00:01:31.608 [167/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:31.608 [168/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:31.608 [169/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:31.608 [170/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.608 [171/267] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:31.608 [172/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:31.608 [173/267] Linking static target lib/librte_power.a 00:01:31.608 [174/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:31.608 [175/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:31.608 [176/267] Linking static target lib/librte_compressdev.a 00:01:31.608 [177/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:31.608 [178/267] Linking static target lib/librte_eal.a 00:01:31.608 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:31.608 [180/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:31.608 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:31.608 [182/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:31.608 [183/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:31.608 [184/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:31.608 [185/267] Linking static target drivers/librte_mempool_ring.a 00:01:31.608 [186/267] Linking static target lib/librte_reorder.a 00:01:31.608 [187/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.608 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:31.608 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:31.608 [190/267] Linking target lib/librte_log.so.24.1 00:01:31.608 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:31.608 [192/267] Linking static target lib/librte_security.a 00:01:31.608 [193/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:31.870 [194/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 
00:01:31.870 [195/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:31.870 [196/267] Linking static target lib/librte_mbuf.a 00:01:31.870 [197/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:31.870 [198/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:31.870 [199/267] Linking static target drivers/librte_bus_pci.a 00:01:31.870 [200/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:31.870 [201/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:31.870 [202/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.870 [203/267] Linking static target lib/librte_hash.a 00:01:31.870 [204/267] Linking target lib/librte_kvargs.so.24.1 00:01:31.870 [205/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.870 [206/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:31.870 [207/267] Linking static target lib/librte_cryptodev.a 00:01:31.870 [208/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:32.131 [209/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.131 [210/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:32.131 [211/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.131 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.131 [213/267] Linking target lib/librte_telemetry.so.24.1 00:01:32.131 [214/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:32.131 [215/267] Linking static target lib/librte_ethdev.a 00:01:32.131 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:32.391 [217/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:32.392 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.392 [219/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:32.392 [220/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.653 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.653 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.653 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.653 [224/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.653 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.914 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.488 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:33.488 [228/267] Linking static target lib/librte_vhost.a 00:01:34.062 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.467 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.064 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.006 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.006 [233/267] Linking target lib/librte_eal.so.24.1 00:01:43.267 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:43.267 [235/267] Linking target lib/librte_dmadev.so.24.1 00:01:43.267 [236/267] Linking target lib/librte_ring.so.24.1 00:01:43.267 
[237/267] Linking target lib/librte_meter.so.24.1 00:01:43.267 [238/267] Linking target lib/librte_pci.so.24.1 00:01:43.267 [239/267] Linking target lib/librte_timer.so.24.1 00:01:43.267 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:43.529 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:43.529 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:43.529 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:43.529 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:43.529 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:43.529 [246/267] Linking target lib/librte_mempool.so.24.1 00:01:43.529 [247/267] Linking target lib/librte_rcu.so.24.1 00:01:43.529 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:43.529 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:43.529 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:43.529 [251/267] Linking target lib/librte_mbuf.so.24.1 00:01:43.529 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:43.790 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:43.790 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:01:43.790 [255/267] Linking target lib/librte_reorder.so.24.1 00:01:43.790 [256/267] Linking target lib/librte_compressdev.so.24.1 00:01:43.790 [257/267] Linking target lib/librte_net.so.24.1 00:01:43.790 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:44.052 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:44.052 [260/267] Linking target lib/librte_cmdline.so.24.1 00:01:44.052 [261/267] Linking target lib/librte_hash.so.24.1 00:01:44.052 
[262/267] Linking target lib/librte_security.so.24.1 00:01:44.052 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:44.052 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:44.052 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:44.313 [266/267] Linking target lib/librte_power.so.24.1 00:01:44.313 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:44.313 INFO: autodetecting backend as ninja 00:01:44.313 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:45.256 CC lib/ut_mock/mock.o 00:01:45.256 CC lib/ut/ut.o 00:01:45.256 CC lib/log/log.o 00:01:45.256 CC lib/log/log_flags.o 00:01:45.256 CC lib/log/log_deprecated.o 00:01:45.517 LIB libspdk_ut_mock.a 00:01:45.517 LIB libspdk_ut.a 00:01:45.517 LIB libspdk_log.a 00:01:45.517 SO libspdk_ut_mock.so.6.0 00:01:45.517 SO libspdk_ut.so.2.0 00:01:45.517 SO libspdk_log.so.7.0 00:01:45.517 SYMLINK libspdk_ut_mock.so 00:01:45.517 SYMLINK libspdk_ut.so 00:01:45.517 SYMLINK libspdk_log.so 00:01:46.090 CC lib/ioat/ioat.o 00:01:46.090 CC lib/util/base64.o 00:01:46.090 CC lib/util/bit_array.o 00:01:46.090 CC lib/dma/dma.o 00:01:46.090 CC lib/util/cpuset.o 00:01:46.090 CC lib/util/crc16.o 00:01:46.090 CC lib/util/crc32.o 00:01:46.090 CC lib/util/crc32_ieee.o 00:01:46.090 CXX lib/trace_parser/trace.o 00:01:46.090 CC lib/util/crc32c.o 00:01:46.090 CC lib/util/crc64.o 00:01:46.090 CC lib/util/fd.o 00:01:46.090 CC lib/util/dif.o 00:01:46.090 CC lib/util/hexlify.o 00:01:46.090 CC lib/util/file.o 00:01:46.090 CC lib/util/iov.o 00:01:46.090 CC lib/util/math.o 00:01:46.090 CC lib/util/pipe.o 00:01:46.090 CC lib/util/strerror_tls.o 00:01:46.090 CC lib/util/string.o 00:01:46.090 CC lib/util/uuid.o 00:01:46.090 CC lib/util/fd_group.o 00:01:46.090 CC lib/util/xor.o 00:01:46.090 CC lib/util/zipf.o 00:01:46.090 CC lib/vfio_user/host/vfio_user.o 
00:01:46.090 CC lib/vfio_user/host/vfio_user_pci.o 00:01:46.090 LIB libspdk_dma.a 00:01:46.090 SO libspdk_dma.so.4.0 00:01:46.090 LIB libspdk_ioat.a 00:01:46.352 SO libspdk_ioat.so.7.0 00:01:46.352 SYMLINK libspdk_dma.so 00:01:46.352 SYMLINK libspdk_ioat.so 00:01:46.352 LIB libspdk_vfio_user.a 00:01:46.352 SO libspdk_vfio_user.so.5.0 00:01:46.352 LIB libspdk_util.a 00:01:46.352 SYMLINK libspdk_vfio_user.so 00:01:46.614 SO libspdk_util.so.9.0 00:01:46.614 SYMLINK libspdk_util.so 00:01:46.876 LIB libspdk_trace_parser.a 00:01:46.876 SO libspdk_trace_parser.so.5.0 00:01:46.876 SYMLINK libspdk_trace_parser.so 00:01:46.876 CC lib/conf/conf.o 00:01:46.876 CC lib/json/json_util.o 00:01:46.876 CC lib/json/json_parse.o 00:01:46.876 CC lib/json/json_write.o 00:01:46.876 CC lib/env_dpdk/env.o 00:01:46.876 CC lib/vmd/vmd.o 00:01:46.876 CC lib/vmd/led.o 00:01:46.876 CC lib/env_dpdk/memory.o 00:01:46.876 CC lib/env_dpdk/pci.o 00:01:46.876 CC lib/rdma/common.o 00:01:47.137 CC lib/idxd/idxd.o 00:01:47.137 CC lib/env_dpdk/init.o 00:01:47.137 CC lib/rdma/rdma_verbs.o 00:01:47.137 CC lib/idxd/idxd_user.o 00:01:47.137 CC lib/env_dpdk/threads.o 00:01:47.137 CC lib/idxd/idxd_kernel.o 00:01:47.137 CC lib/env_dpdk/pci_ioat.o 00:01:47.137 CC lib/env_dpdk/pci_virtio.o 00:01:47.137 CC lib/env_dpdk/pci_vmd.o 00:01:47.137 CC lib/env_dpdk/pci_idxd.o 00:01:47.137 CC lib/env_dpdk/pci_event.o 00:01:47.137 CC lib/env_dpdk/sigbus_handler.o 00:01:47.137 CC lib/env_dpdk/pci_dpdk.o 00:01:47.137 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:47.137 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:47.137 LIB libspdk_conf.a 00:01:47.137 SO libspdk_conf.so.6.0 00:01:47.398 LIB libspdk_rdma.a 00:01:47.398 LIB libspdk_json.a 00:01:47.398 SO libspdk_rdma.so.6.0 00:01:47.398 SYMLINK libspdk_conf.so 00:01:47.398 SO libspdk_json.so.6.0 00:01:47.398 SYMLINK libspdk_rdma.so 00:01:47.398 SYMLINK libspdk_json.so 00:01:47.398 LIB libspdk_idxd.a 00:01:47.660 SO libspdk_idxd.so.12.0 00:01:47.660 LIB libspdk_vmd.a 00:01:47.660 SO 
libspdk_vmd.so.6.0 00:01:47.660 SYMLINK libspdk_idxd.so 00:01:47.660 SYMLINK libspdk_vmd.so 00:01:47.660 CC lib/jsonrpc/jsonrpc_server.o 00:01:47.660 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:47.660 CC lib/jsonrpc/jsonrpc_client.o 00:01:47.660 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:47.922 LIB libspdk_jsonrpc.a 00:01:48.184 SO libspdk_jsonrpc.so.6.0 00:01:48.184 SYMLINK libspdk_jsonrpc.so 00:01:48.184 LIB libspdk_env_dpdk.a 00:01:48.184 SO libspdk_env_dpdk.so.14.1 00:01:48.445 SYMLINK libspdk_env_dpdk.so 00:01:48.445 CC lib/rpc/rpc.o 00:01:48.706 LIB libspdk_rpc.a 00:01:48.706 SO libspdk_rpc.so.6.0 00:01:48.706 SYMLINK libspdk_rpc.so 00:01:49.279 CC lib/notify/notify.o 00:01:49.279 CC lib/notify/notify_rpc.o 00:01:49.279 CC lib/trace/trace.o 00:01:49.279 CC lib/trace/trace_flags.o 00:01:49.279 CC lib/keyring/keyring.o 00:01:49.279 CC lib/trace/trace_rpc.o 00:01:49.279 CC lib/keyring/keyring_rpc.o 00:01:49.279 LIB libspdk_notify.a 00:01:49.279 SO libspdk_notify.so.6.0 00:01:49.279 LIB libspdk_keyring.a 00:01:49.279 LIB libspdk_trace.a 00:01:49.540 SYMLINK libspdk_notify.so 00:01:49.540 SO libspdk_keyring.so.1.0 00:01:49.540 SO libspdk_trace.so.10.0 00:01:49.540 SYMLINK libspdk_keyring.so 00:01:49.540 SYMLINK libspdk_trace.so 00:01:49.802 CC lib/sock/sock.o 00:01:49.802 CC lib/sock/sock_rpc.o 00:01:49.802 CC lib/thread/thread.o 00:01:49.802 CC lib/thread/iobuf.o 00:01:50.376 LIB libspdk_sock.a 00:01:50.376 SO libspdk_sock.so.9.0 00:01:50.376 SYMLINK libspdk_sock.so 00:01:50.638 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:50.638 CC lib/nvme/nvme_ctrlr.o 00:01:50.638 CC lib/nvme/nvme_fabric.o 00:01:50.638 CC lib/nvme/nvme_ns_cmd.o 00:01:50.638 CC lib/nvme/nvme_ns.o 00:01:50.638 CC lib/nvme/nvme_pcie_common.o 00:01:50.638 CC lib/nvme/nvme.o 00:01:50.638 CC lib/nvme/nvme_pcie.o 00:01:50.638 CC lib/nvme/nvme_qpair.o 00:01:50.638 CC lib/nvme/nvme_quirks.o 00:01:50.638 CC lib/nvme/nvme_transport.o 00:01:50.638 CC lib/nvme/nvme_discovery.o 00:01:50.638 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:50.638 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:50.638 CC lib/nvme/nvme_tcp.o 00:01:50.638 CC lib/nvme/nvme_opal.o 00:01:50.638 CC lib/nvme/nvme_io_msg.o 00:01:50.638 CC lib/nvme/nvme_poll_group.o 00:01:50.638 CC lib/nvme/nvme_stubs.o 00:01:50.638 CC lib/nvme/nvme_zns.o 00:01:50.638 CC lib/nvme/nvme_auth.o 00:01:50.638 CC lib/nvme/nvme_cuse.o 00:01:50.638 CC lib/nvme/nvme_rdma.o 00:01:51.212 LIB libspdk_thread.a 00:01:51.212 SO libspdk_thread.so.10.0 00:01:51.212 SYMLINK libspdk_thread.so 00:01:51.473 CC lib/accel/accel.o 00:01:51.473 CC lib/accel/accel_rpc.o 00:01:51.473 CC lib/accel/accel_sw.o 00:01:51.473 CC lib/blob/blobstore.o 00:01:51.473 CC lib/init/json_config.o 00:01:51.473 CC lib/blob/request.o 00:01:51.473 CC lib/blob/blob_bs_dev.o 00:01:51.473 CC lib/virtio/virtio.o 00:01:51.473 CC lib/init/subsystem.o 00:01:51.473 CC lib/blob/zeroes.o 00:01:51.473 CC lib/virtio/virtio_vhost_user.o 00:01:51.473 CC lib/init/subsystem_rpc.o 00:01:51.473 CC lib/virtio/virtio_vfio_user.o 00:01:51.473 CC lib/init/rpc.o 00:01:51.473 CC lib/virtio/virtio_pci.o 00:01:51.735 LIB libspdk_init.a 00:01:51.996 SO libspdk_init.so.5.0 00:01:51.996 LIB libspdk_virtio.a 00:01:51.996 SYMLINK libspdk_init.so 00:01:51.996 SO libspdk_virtio.so.7.0 00:01:51.996 SYMLINK libspdk_virtio.so 00:01:52.258 CC lib/event/app.o 00:01:52.258 CC lib/event/reactor.o 00:01:52.258 CC lib/event/log_rpc.o 00:01:52.258 CC lib/event/app_rpc.o 00:01:52.258 CC lib/event/scheduler_static.o 00:01:52.519 LIB libspdk_accel.a 00:01:52.519 LIB libspdk_nvme.a 00:01:52.519 SO libspdk_accel.so.15.0 00:01:52.519 SO libspdk_nvme.so.13.0 00:01:52.519 SYMLINK libspdk_accel.so 00:01:52.780 LIB libspdk_event.a 00:01:52.780 SO libspdk_event.so.13.1 00:01:52.780 SYMLINK libspdk_nvme.so 00:01:52.780 SYMLINK libspdk_event.so 00:01:53.041 CC lib/bdev/bdev.o 00:01:53.041 CC lib/bdev/bdev_rpc.o 00:01:53.041 CC lib/bdev/part.o 00:01:53.041 CC lib/bdev/bdev_zone.o 00:01:53.041 CC lib/bdev/scsi_nvme.o 
00:01:54.027 LIB libspdk_blob.a 00:01:54.289 SO libspdk_blob.so.11.0 00:01:54.289 SYMLINK libspdk_blob.so 00:01:54.551 CC lib/lvol/lvol.o 00:01:54.551 CC lib/blobfs/blobfs.o 00:01:54.551 CC lib/blobfs/tree.o 00:01:55.124 LIB libspdk_bdev.a 00:01:55.124 SO libspdk_bdev.so.15.0 00:01:55.124 SYMLINK libspdk_bdev.so 00:01:55.385 LIB libspdk_blobfs.a 00:01:55.385 SO libspdk_blobfs.so.10.0 00:01:55.385 LIB libspdk_lvol.a 00:01:55.385 SO libspdk_lvol.so.10.0 00:01:55.385 SYMLINK libspdk_blobfs.so 00:01:55.385 SYMLINK libspdk_lvol.so 00:01:55.648 CC lib/nvmf/ctrlr.o 00:01:55.648 CC lib/ublk/ublk.o 00:01:55.648 CC lib/nvmf/ctrlr_discovery.o 00:01:55.648 CC lib/ublk/ublk_rpc.o 00:01:55.648 CC lib/nvmf/ctrlr_bdev.o 00:01:55.648 CC lib/nvmf/subsystem.o 00:01:55.648 CC lib/nbd/nbd.o 00:01:55.648 CC lib/nvmf/nvmf.o 00:01:55.648 CC lib/nbd/nbd_rpc.o 00:01:55.648 CC lib/nvmf/nvmf_rpc.o 00:01:55.648 CC lib/nvmf/transport.o 00:01:55.648 CC lib/nvmf/tcp.o 00:01:55.648 CC lib/nvmf/stubs.o 00:01:55.648 CC lib/nvmf/mdns_server.o 00:01:55.648 CC lib/nvmf/rdma.o 00:01:55.648 CC lib/nvmf/auth.o 00:01:55.648 CC lib/ftl/ftl_core.o 00:01:55.648 CC lib/scsi/dev.o 00:01:55.648 CC lib/ftl/ftl_init.o 00:01:55.648 CC lib/scsi/lun.o 00:01:55.648 CC lib/scsi/port.o 00:01:55.648 CC lib/ftl/ftl_layout.o 00:01:55.648 CC lib/scsi/scsi.o 00:01:55.648 CC lib/ftl/ftl_debug.o 00:01:55.648 CC lib/scsi/scsi_bdev.o 00:01:55.648 CC lib/ftl/ftl_io.o 00:01:55.648 CC lib/ftl/ftl_sb.o 00:01:55.648 CC lib/scsi/scsi_pr.o 00:01:55.648 CC lib/scsi/scsi_rpc.o 00:01:55.648 CC lib/ftl/ftl_l2p.o 00:01:55.648 CC lib/ftl/ftl_l2p_flat.o 00:01:55.648 CC lib/scsi/task.o 00:01:55.648 CC lib/ftl/ftl_band.o 00:01:55.648 CC lib/ftl/ftl_nv_cache.o 00:01:55.648 CC lib/ftl/ftl_band_ops.o 00:01:55.648 CC lib/ftl/ftl_writer.o 00:01:55.648 CC lib/ftl/ftl_rq.o 00:01:55.648 CC lib/ftl/ftl_reloc.o 00:01:55.648 CC lib/ftl/ftl_l2p_cache.o 00:01:55.648 CC lib/ftl/ftl_p2l.o 00:01:55.648 CC lib/ftl/mngt/ftl_mngt.o 00:01:55.648 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:01:55.648 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:55.648 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:55.648 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:55.648 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:55.648 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:55.648 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:55.648 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:55.648 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:55.648 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:55.648 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:55.649 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:55.649 CC lib/ftl/utils/ftl_mempool.o 00:01:55.649 CC lib/ftl/utils/ftl_conf.o 00:01:55.649 CC lib/ftl/utils/ftl_md.o 00:01:55.649 CC lib/ftl/utils/ftl_bitmap.o 00:01:55.649 CC lib/ftl/utils/ftl_property.o 00:01:55.649 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:55.649 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:55.649 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:55.649 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:55.649 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:55.649 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:55.649 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:55.649 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:55.649 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:55.649 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:55.649 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:55.649 CC lib/ftl/base/ftl_base_dev.o 00:01:55.649 CC lib/ftl/base/ftl_base_bdev.o 00:01:55.649 CC lib/ftl/ftl_trace.o 00:01:55.910 LIB libspdk_nbd.a 00:01:55.910 SO libspdk_nbd.so.7.0 00:01:56.171 SYMLINK libspdk_nbd.so 00:01:56.171 LIB libspdk_scsi.a 00:01:56.171 SO libspdk_scsi.so.9.0 00:01:56.171 SYMLINK libspdk_scsi.so 00:01:56.171 LIB libspdk_ublk.a 00:01:56.433 SO libspdk_ublk.so.3.0 00:01:56.433 SYMLINK libspdk_ublk.so 00:01:56.433 CC lib/vhost/vhost.o 00:01:56.433 CC lib/vhost/vhost_rpc.o 00:01:56.433 CC lib/vhost/vhost_scsi.o 00:01:56.433 CC lib/vhost/vhost_blk.o 00:01:56.433 CC lib/vhost/rte_vhost_user.o 00:01:56.695 CC lib/iscsi/conn.o 00:01:56.695 CC lib/iscsi/init_grp.o 
00:01:56.695 CC lib/iscsi/iscsi.o 00:01:56.695 CC lib/iscsi/param.o 00:01:56.695 CC lib/iscsi/md5.o 00:01:56.695 CC lib/iscsi/portal_grp.o 00:01:56.695 CC lib/iscsi/tgt_node.o 00:01:56.695 CC lib/iscsi/iscsi_subsystem.o 00:01:56.695 CC lib/iscsi/iscsi_rpc.o 00:01:56.695 CC lib/iscsi/task.o 00:01:56.695 LIB libspdk_ftl.a 00:01:56.695 SO libspdk_ftl.so.9.0 00:01:57.266 SYMLINK libspdk_ftl.so 00:01:57.266 LIB libspdk_nvmf.a 00:01:57.527 SO libspdk_nvmf.so.18.1 00:01:57.527 LIB libspdk_vhost.a 00:01:57.527 SO libspdk_vhost.so.8.0 00:01:57.527 SYMLINK libspdk_nvmf.so 00:01:57.787 SYMLINK libspdk_vhost.so 00:01:57.787 LIB libspdk_iscsi.a 00:01:57.787 SO libspdk_iscsi.so.8.0 00:01:58.049 SYMLINK libspdk_iscsi.so 00:01:58.621 CC module/env_dpdk/env_dpdk_rpc.o 00:01:58.621 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:58.621 CC module/scheduler/gscheduler/gscheduler.o 00:01:58.621 CC module/accel/iaa/accel_iaa.o 00:01:58.621 CC module/keyring/linux/keyring.o 00:01:58.621 CC module/accel/iaa/accel_iaa_rpc.o 00:01:58.621 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:58.621 CC module/keyring/linux/keyring_rpc.o 00:01:58.621 CC module/keyring/file/keyring.o 00:01:58.621 CC module/keyring/file/keyring_rpc.o 00:01:58.621 CC module/accel/error/accel_error.o 00:01:58.621 CC module/accel/error/accel_error_rpc.o 00:01:58.621 CC module/accel/dsa/accel_dsa.o 00:01:58.621 CC module/sock/posix/posix.o 00:01:58.621 CC module/accel/ioat/accel_ioat_rpc.o 00:01:58.621 CC module/blob/bdev/blob_bdev.o 00:01:58.621 CC module/accel/ioat/accel_ioat.o 00:01:58.621 LIB libspdk_env_dpdk_rpc.a 00:01:58.621 CC module/accel/dsa/accel_dsa_rpc.o 00:01:58.621 SO libspdk_env_dpdk_rpc.so.6.0 00:01:58.621 SYMLINK libspdk_env_dpdk_rpc.so 00:01:58.883 LIB libspdk_scheduler_gscheduler.a 00:01:58.883 LIB libspdk_keyring_linux.a 00:01:58.883 LIB libspdk_keyring_file.a 00:01:58.883 SO libspdk_scheduler_gscheduler.so.4.0 00:01:58.883 LIB libspdk_scheduler_dpdk_governor.a 00:01:58.883 LIB 
libspdk_scheduler_dynamic.a 00:01:58.883 SO libspdk_keyring_file.so.1.0 00:01:58.883 LIB libspdk_accel_iaa.a 00:01:58.883 SO libspdk_keyring_linux.so.1.0 00:01:58.883 SO libspdk_scheduler_dynamic.so.4.0 00:01:58.883 LIB libspdk_accel_error.a 00:01:58.883 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:58.883 LIB libspdk_accel_ioat.a 00:01:58.883 SO libspdk_accel_iaa.so.3.0 00:01:58.883 SYMLINK libspdk_scheduler_gscheduler.so 00:01:58.883 LIB libspdk_accel_dsa.a 00:01:58.883 SO libspdk_accel_error.so.2.0 00:01:58.883 SO libspdk_accel_ioat.so.6.0 00:01:58.883 SYMLINK libspdk_keyring_file.so 00:01:58.883 SYMLINK libspdk_keyring_linux.so 00:01:58.883 SYMLINK libspdk_scheduler_dynamic.so 00:01:58.883 LIB libspdk_blob_bdev.a 00:01:58.883 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:58.883 SO libspdk_accel_dsa.so.5.0 00:01:58.883 SYMLINK libspdk_accel_iaa.so 00:01:58.883 SYMLINK libspdk_accel_error.so 00:01:58.883 SO libspdk_blob_bdev.so.11.0 00:01:58.883 SYMLINK libspdk_accel_ioat.so 00:01:58.883 SYMLINK libspdk_accel_dsa.so 00:01:59.145 SYMLINK libspdk_blob_bdev.so 00:01:59.407 LIB libspdk_sock_posix.a 00:01:59.407 SO libspdk_sock_posix.so.6.0 00:01:59.407 SYMLINK libspdk_sock_posix.so 00:01:59.668 CC module/bdev/error/vbdev_error.o 00:01:59.668 CC module/bdev/error/vbdev_error_rpc.o 00:01:59.668 CC module/bdev/delay/vbdev_delay.o 00:01:59.668 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:59.668 CC module/bdev/passthru/vbdev_passthru.o 00:01:59.668 CC module/bdev/malloc/bdev_malloc.o 00:01:59.668 CC module/bdev/nvme/bdev_nvme.o 00:01:59.668 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:59.668 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:59.668 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:59.668 CC module/bdev/nvme/nvme_rpc.o 00:01:59.668 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:59.668 CC module/bdev/nvme/bdev_mdns_client.o 00:01:59.668 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:59.668 CC module/bdev/null/bdev_null.o 00:01:59.668 CC 
module/bdev/virtio/bdev_virtio_rpc.o 00:01:59.668 CC module/bdev/nvme/vbdev_opal.o 00:01:59.668 CC module/bdev/ftl/bdev_ftl.o 00:01:59.668 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:59.668 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:59.668 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:59.668 CC module/bdev/null/bdev_null_rpc.o 00:01:59.668 CC module/bdev/iscsi/bdev_iscsi.o 00:01:59.668 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:59.668 CC module/bdev/lvol/vbdev_lvol.o 00:01:59.668 CC module/bdev/gpt/gpt.o 00:01:59.668 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:59.668 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:59.668 CC module/bdev/gpt/vbdev_gpt.o 00:01:59.668 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:59.668 CC module/bdev/raid/bdev_raid.o 00:01:59.668 CC module/blobfs/bdev/blobfs_bdev.o 00:01:59.668 CC module/bdev/raid/bdev_raid_rpc.o 00:01:59.668 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:59.668 CC module/bdev/raid/bdev_raid_sb.o 00:01:59.668 CC module/bdev/aio/bdev_aio.o 00:01:59.668 CC module/bdev/raid/raid0.o 00:01:59.668 CC module/bdev/aio/bdev_aio_rpc.o 00:01:59.668 CC module/bdev/raid/raid1.o 00:01:59.668 CC module/bdev/split/vbdev_split.o 00:01:59.668 CC module/bdev/raid/concat.o 00:01:59.668 CC module/bdev/split/vbdev_split_rpc.o 00:01:59.930 LIB libspdk_blobfs_bdev.a 00:01:59.930 LIB libspdk_bdev_split.a 00:01:59.930 SO libspdk_blobfs_bdev.so.6.0 00:01:59.930 LIB libspdk_bdev_ftl.a 00:01:59.930 LIB libspdk_bdev_error.a 00:01:59.930 LIB libspdk_bdev_passthru.a 00:01:59.930 LIB libspdk_bdev_null.a 00:01:59.930 LIB libspdk_bdev_gpt.a 00:01:59.930 SO libspdk_bdev_split.so.6.0 00:01:59.930 SO libspdk_bdev_ftl.so.6.0 00:01:59.930 LIB libspdk_bdev_zone_block.a 00:01:59.930 SO libspdk_bdev_error.so.6.0 00:01:59.930 SYMLINK libspdk_blobfs_bdev.so 00:01:59.930 SO libspdk_bdev_passthru.so.6.0 00:01:59.930 SO libspdk_bdev_gpt.so.6.0 00:01:59.930 LIB libspdk_bdev_aio.a 00:01:59.930 SO libspdk_bdev_null.so.6.0 00:01:59.930 SO 
libspdk_bdev_zone_block.so.6.0 00:01:59.930 SO libspdk_bdev_aio.so.6.0 00:01:59.930 LIB libspdk_bdev_malloc.a 00:01:59.930 SYMLINK libspdk_bdev_split.so 00:01:59.930 LIB libspdk_bdev_delay.a 00:01:59.930 SYMLINK libspdk_bdev_error.so 00:01:59.930 SYMLINK libspdk_bdev_ftl.so 00:01:59.930 SYMLINK libspdk_bdev_gpt.so 00:01:59.930 LIB libspdk_bdev_iscsi.a 00:01:59.930 SYMLINK libspdk_bdev_passthru.so 00:01:59.930 SO libspdk_bdev_malloc.so.6.0 00:01:59.930 SYMLINK libspdk_bdev_null.so 00:01:59.930 SO libspdk_bdev_iscsi.so.6.0 00:01:59.930 SYMLINK libspdk_bdev_zone_block.so 00:01:59.930 SYMLINK libspdk_bdev_aio.so 00:01:59.930 SO libspdk_bdev_delay.so.6.0 00:01:59.930 LIB libspdk_bdev_virtio.a 00:01:59.930 SYMLINK libspdk_bdev_malloc.so 00:01:59.930 SYMLINK libspdk_bdev_iscsi.so 00:02:00.191 SYMLINK libspdk_bdev_delay.so 00:02:00.191 LIB libspdk_bdev_lvol.a 00:02:00.191 SO libspdk_bdev_virtio.so.6.0 00:02:00.191 SO libspdk_bdev_lvol.so.6.0 00:02:00.191 SYMLINK libspdk_bdev_virtio.so 00:02:00.191 SYMLINK libspdk_bdev_lvol.so 00:02:00.452 LIB libspdk_bdev_raid.a 00:02:00.452 SO libspdk_bdev_raid.so.6.0 00:02:00.712 SYMLINK libspdk_bdev_raid.so 00:02:01.653 LIB libspdk_bdev_nvme.a 00:02:01.653 SO libspdk_bdev_nvme.so.7.0 00:02:01.653 SYMLINK libspdk_bdev_nvme.so 00:02:02.226 CC module/event/subsystems/iobuf/iobuf.o 00:02:02.226 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:02.226 CC module/event/subsystems/keyring/keyring.o 00:02:02.226 CC module/event/subsystems/vmd/vmd.o 00:02:02.226 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:02.226 CC module/event/subsystems/scheduler/scheduler.o 00:02:02.226 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:02.226 CC module/event/subsystems/sock/sock.o 00:02:02.487 LIB libspdk_event_scheduler.a 00:02:02.487 LIB libspdk_event_keyring.a 00:02:02.487 LIB libspdk_event_sock.a 00:02:02.487 LIB libspdk_event_iobuf.a 00:02:02.487 LIB libspdk_event_vmd.a 00:02:02.487 LIB libspdk_event_vhost_blk.a 00:02:02.487 SO 
libspdk_event_scheduler.so.4.0 00:02:02.487 SO libspdk_event_keyring.so.1.0 00:02:02.487 SO libspdk_event_sock.so.5.0 00:02:02.487 SO libspdk_event_iobuf.so.3.0 00:02:02.487 SO libspdk_event_vmd.so.6.0 00:02:02.487 SO libspdk_event_vhost_blk.so.3.0 00:02:02.487 SYMLINK libspdk_event_scheduler.so 00:02:02.487 SYMLINK libspdk_event_keyring.so 00:02:02.487 SYMLINK libspdk_event_sock.so 00:02:02.487 SYMLINK libspdk_event_vhost_blk.so 00:02:02.487 SYMLINK libspdk_event_iobuf.so 00:02:02.487 SYMLINK libspdk_event_vmd.so 00:02:02.747 CC module/event/subsystems/accel/accel.o 00:02:03.008 LIB libspdk_event_accel.a 00:02:03.008 SO libspdk_event_accel.so.6.0 00:02:03.008 SYMLINK libspdk_event_accel.so 00:02:03.582 CC module/event/subsystems/bdev/bdev.o 00:02:03.582 LIB libspdk_event_bdev.a 00:02:03.582 SO libspdk_event_bdev.so.6.0 00:02:03.582 SYMLINK libspdk_event_bdev.so 00:02:04.157 CC module/event/subsystems/nbd/nbd.o 00:02:04.157 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:04.157 CC module/event/subsystems/ublk/ublk.o 00:02:04.157 CC module/event/subsystems/scsi/scsi.o 00:02:04.157 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:04.157 LIB libspdk_event_nbd.a 00:02:04.157 LIB libspdk_event_ublk.a 00:02:04.157 LIB libspdk_event_scsi.a 00:02:04.157 SO libspdk_event_nbd.so.6.0 00:02:04.157 SO libspdk_event_ublk.so.3.0 00:02:04.157 SO libspdk_event_scsi.so.6.0 00:02:04.157 LIB libspdk_event_nvmf.a 00:02:04.418 SYMLINK libspdk_event_nbd.so 00:02:04.418 SYMLINK libspdk_event_ublk.so 00:02:04.418 SYMLINK libspdk_event_scsi.so 00:02:04.418 SO libspdk_event_nvmf.so.6.0 00:02:04.418 SYMLINK libspdk_event_nvmf.so 00:02:04.680 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:04.680 CC module/event/subsystems/iscsi/iscsi.o 00:02:04.942 LIB libspdk_event_vhost_scsi.a 00:02:04.942 SO libspdk_event_vhost_scsi.so.3.0 00:02:04.942 LIB libspdk_event_iscsi.a 00:02:04.942 SO libspdk_event_iscsi.so.6.0 00:02:04.942 SYMLINK libspdk_event_vhost_scsi.so 00:02:04.942 SYMLINK 
libspdk_event_iscsi.so 00:02:05.207 SO libspdk.so.6.0 00:02:05.207 SYMLINK libspdk.so 00:02:05.470 CC app/spdk_nvme_identify/identify.o 00:02:05.470 CXX app/trace/trace.o 00:02:05.470 CC app/spdk_lspci/spdk_lspci.o 00:02:05.470 CC app/spdk_top/spdk_top.o 00:02:05.470 CC app/trace_record/trace_record.o 00:02:05.470 CC app/spdk_nvme_perf/perf.o 00:02:05.470 CC test/rpc_client/rpc_client_test.o 00:02:05.470 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:05.743 CC app/vhost/vhost.o 00:02:05.743 CC app/spdk_nvme_discover/discovery_aer.o 00:02:05.743 TEST_HEADER include/spdk/accel.h 00:02:05.743 TEST_HEADER include/spdk/assert.h 00:02:05.743 TEST_HEADER include/spdk/barrier.h 00:02:05.743 TEST_HEADER include/spdk/base64.h 00:02:05.743 TEST_HEADER include/spdk/bdev.h 00:02:05.743 TEST_HEADER include/spdk/bdev_module.h 00:02:05.743 TEST_HEADER include/spdk/bit_array.h 00:02:05.743 TEST_HEADER include/spdk/bit_pool.h 00:02:05.743 CC app/spdk_dd/spdk_dd.o 00:02:05.743 TEST_HEADER include/spdk/blob_bdev.h 00:02:05.743 CC app/iscsi_tgt/iscsi_tgt.o 00:02:05.743 TEST_HEADER include/spdk/bdev_zone.h 00:02:05.743 TEST_HEADER include/spdk/accel_module.h 00:02:05.743 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:05.743 CC app/nvmf_tgt/nvmf_main.o 00:02:05.743 TEST_HEADER include/spdk/blobfs.h 00:02:05.743 TEST_HEADER include/spdk/conf.h 00:02:05.743 TEST_HEADER include/spdk/cpuset.h 00:02:05.743 TEST_HEADER include/spdk/blob.h 00:02:05.743 TEST_HEADER include/spdk/crc16.h 00:02:05.743 TEST_HEADER include/spdk/config.h 00:02:05.743 TEST_HEADER include/spdk/crc32.h 00:02:05.743 TEST_HEADER include/spdk/crc64.h 00:02:05.743 TEST_HEADER include/spdk/dif.h 00:02:05.743 TEST_HEADER include/spdk/dma.h 00:02:05.743 TEST_HEADER include/spdk/endian.h 00:02:05.743 TEST_HEADER include/spdk/env.h 00:02:05.743 TEST_HEADER include/spdk/event.h 00:02:05.743 TEST_HEADER include/spdk/env_dpdk.h 00:02:05.743 TEST_HEADER include/spdk/fd_group.h 00:02:05.743 CC app/spdk_tgt/spdk_tgt.o 00:02:05.743 
TEST_HEADER include/spdk/fd.h 00:02:05.743 TEST_HEADER include/spdk/file.h 00:02:05.743 TEST_HEADER include/spdk/histogram_data.h 00:02:05.743 TEST_HEADER include/spdk/idxd.h 00:02:05.743 TEST_HEADER include/spdk/idxd_spec.h 00:02:05.743 TEST_HEADER include/spdk/ftl.h 00:02:05.743 TEST_HEADER include/spdk/hexlify.h 00:02:05.743 TEST_HEADER include/spdk/ioat.h 00:02:05.743 TEST_HEADER include/spdk/gpt_spec.h 00:02:05.743 TEST_HEADER include/spdk/ioat_spec.h 00:02:05.743 TEST_HEADER include/spdk/iscsi_spec.h 00:02:05.743 TEST_HEADER include/spdk/json.h 00:02:05.743 TEST_HEADER include/spdk/jsonrpc.h 00:02:05.743 TEST_HEADER include/spdk/keyring.h 00:02:05.743 TEST_HEADER include/spdk/init.h 00:02:05.743 TEST_HEADER include/spdk/keyring_module.h 00:02:05.743 TEST_HEADER include/spdk/likely.h 00:02:05.743 TEST_HEADER include/spdk/log.h 00:02:05.743 TEST_HEADER include/spdk/lvol.h 00:02:05.743 TEST_HEADER include/spdk/memory.h 00:02:05.743 TEST_HEADER include/spdk/nbd.h 00:02:05.743 TEST_HEADER include/spdk/notify.h 00:02:05.743 TEST_HEADER include/spdk/nvme.h 00:02:05.743 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:05.743 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:05.743 TEST_HEADER include/spdk/nvme_intel.h 00:02:05.743 TEST_HEADER include/spdk/nvme_spec.h 00:02:05.743 TEST_HEADER include/spdk/nvme_zns.h 00:02:05.743 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:05.743 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:05.743 TEST_HEADER include/spdk/nvmf.h 00:02:05.743 TEST_HEADER include/spdk/mmio.h 00:02:05.743 TEST_HEADER include/spdk/nvmf_spec.h 00:02:05.743 TEST_HEADER include/spdk/nvmf_transport.h 00:02:05.743 TEST_HEADER include/spdk/opal_spec.h 00:02:05.743 TEST_HEADER include/spdk/pipe.h 00:02:05.743 TEST_HEADER include/spdk/queue.h 00:02:05.743 TEST_HEADER include/spdk/rpc.h 00:02:05.743 TEST_HEADER include/spdk/reduce.h 00:02:05.743 TEST_HEADER include/spdk/scsi.h 00:02:05.743 TEST_HEADER include/spdk/scheduler.h 00:02:05.743 TEST_HEADER 
include/spdk/scsi_spec.h 00:02:05.743 TEST_HEADER include/spdk/sock.h 00:02:05.743 TEST_HEADER include/spdk/string.h 00:02:05.743 TEST_HEADER include/spdk/opal.h 00:02:05.743 TEST_HEADER include/spdk/thread.h 00:02:05.743 TEST_HEADER include/spdk/stdinc.h 00:02:05.743 TEST_HEADER include/spdk/trace.h 00:02:05.743 CC test/nvme/err_injection/err_injection.o 00:02:05.743 TEST_HEADER include/spdk/tree.h 00:02:05.743 TEST_HEADER include/spdk/pci_ids.h 00:02:05.743 TEST_HEADER include/spdk/util.h 00:02:05.743 TEST_HEADER include/spdk/uuid.h 00:02:05.743 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:05.743 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:05.743 TEST_HEADER include/spdk/vhost.h 00:02:05.743 TEST_HEADER include/spdk/vmd.h 00:02:05.743 TEST_HEADER include/spdk/xor.h 00:02:05.743 CC test/app/jsoncat/jsoncat.o 00:02:05.743 TEST_HEADER include/spdk/trace_parser.h 00:02:05.743 TEST_HEADER include/spdk/zipf.h 00:02:05.743 CXX test/cpp_headers/accel.o 00:02:05.743 TEST_HEADER include/spdk/ublk.h 00:02:05.743 CXX test/cpp_headers/accel_module.o 00:02:05.743 TEST_HEADER include/spdk/version.h 00:02:05.743 CXX test/cpp_headers/barrier.o 00:02:05.743 CXX test/cpp_headers/assert.o 00:02:05.743 CC test/nvme/sgl/sgl.o 00:02:05.743 CXX test/cpp_headers/bdev.o 00:02:05.743 CXX test/cpp_headers/bdev_zone.o 00:02:05.743 CXX test/cpp_headers/bdev_module.o 00:02:05.743 CXX test/cpp_headers/blob_bdev.o 00:02:05.743 CXX test/cpp_headers/base64.o 00:02:05.743 CXX test/cpp_headers/blob.o 00:02:05.743 CXX test/cpp_headers/blobfs.o 00:02:05.743 CXX test/cpp_headers/cpuset.o 00:02:05.743 CXX test/cpp_headers/config.o 00:02:05.743 CC test/app/stub/stub.o 00:02:05.743 CC test/event/event_perf/event_perf.o 00:02:05.743 CXX test/cpp_headers/bit_array.o 00:02:05.743 CXX test/cpp_headers/crc32.o 00:02:05.743 CXX test/cpp_headers/crc64.o 00:02:05.743 CXX test/cpp_headers/blobfs_bdev.o 00:02:05.743 CC test/nvme/reserve/reserve.o 00:02:05.743 CC examples/util/zipf/zipf.o 00:02:05.743 CXX 
test/cpp_headers/dma.o 00:02:05.743 CC test/nvme/e2edp/nvme_dp.o 00:02:05.743 CXX test/cpp_headers/endian.o 00:02:05.743 CXX test/cpp_headers/env_dpdk.o 00:02:05.743 CC test/blobfs/mkfs/mkfs.o 00:02:05.743 CXX test/cpp_headers/crc16.o 00:02:05.743 CXX test/cpp_headers/bit_pool.o 00:02:05.743 CXX test/cpp_headers/event.o 00:02:05.743 CC examples/nvme/hello_world/hello_world.o 00:02:05.743 CC examples/accel/perf/accel_perf.o 00:02:05.743 CXX test/cpp_headers/fd_group.o 00:02:05.743 CXX test/cpp_headers/conf.o 00:02:05.743 CXX test/cpp_headers/env.o 00:02:05.743 CC test/app/bdev_svc/bdev_svc.o 00:02:05.743 CC test/nvme/startup/startup.o 00:02:05.743 CC examples/vmd/led/led.o 00:02:05.743 CXX test/cpp_headers/ftl.o 00:02:05.743 CXX test/cpp_headers/dif.o 00:02:05.743 CC test/nvme/aer/aer.o 00:02:05.743 CC test/event/reactor/reactor.o 00:02:05.743 CXX test/cpp_headers/hexlify.o 00:02:05.743 CC test/event/reactor_perf/reactor_perf.o 00:02:05.743 CXX test/cpp_headers/idxd.o 00:02:05.743 CC app/fio/nvme/fio_plugin.o 00:02:05.743 CXX test/cpp_headers/file.o 00:02:05.743 CXX test/cpp_headers/fd.o 00:02:05.743 CC examples/nvme/reconnect/reconnect.o 00:02:05.743 CC test/nvme/boot_partition/boot_partition.o 00:02:05.743 CXX test/cpp_headers/init.o 00:02:05.743 CC test/accel/dif/dif.o 00:02:05.743 CXX test/cpp_headers/ioat.o 00:02:05.743 CXX test/cpp_headers/gpt_spec.o 00:02:05.743 CC test/env/pci/pci_ut.o 00:02:05.744 CXX test/cpp_headers/histogram_data.o 00:02:05.744 CC test/env/vtophys/vtophys.o 00:02:05.744 CC examples/nvme/abort/abort.o 00:02:05.744 CXX test/cpp_headers/idxd_spec.o 00:02:05.744 CXX test/cpp_headers/iscsi_spec.o 00:02:05.744 CXX test/cpp_headers/jsonrpc.o 00:02:05.744 CXX test/cpp_headers/ioat_spec.o 00:02:05.744 CXX test/cpp_headers/likely.o 00:02:05.744 CXX test/cpp_headers/log.o 00:02:05.744 CXX test/cpp_headers/keyring.o 00:02:05.744 CC test/app/histogram_perf/histogram_perf.o 00:02:05.744 LINK spdk_lspci 00:02:05.744 CXX test/cpp_headers/json.o 
00:02:05.744 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:05.744 CXX test/cpp_headers/memory.o 00:02:05.744 CXX test/cpp_headers/keyring_module.o 00:02:05.744 CC test/event/scheduler/scheduler.o 00:02:05.744 CC test/nvme/compliance/nvme_compliance.o 00:02:05.744 CC examples/vmd/lsvmd/lsvmd.o 00:02:05.744 CXX test/cpp_headers/lvol.o 00:02:05.744 CXX test/cpp_headers/nbd.o 00:02:05.744 CXX test/cpp_headers/notify.o 00:02:05.744 CXX test/cpp_headers/nvme_ocssd.o 00:02:05.744 CXX test/cpp_headers/mmio.o 00:02:05.744 CC test/nvme/cuse/cuse.o 00:02:05.744 CXX test/cpp_headers/nvmf_cmd.o 00:02:05.744 CXX test/cpp_headers/nvme.o 00:02:05.744 CXX test/cpp_headers/nvme_intel.o 00:02:05.744 CC examples/bdev/bdevperf/bdevperf.o 00:02:05.744 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:05.744 CXX test/cpp_headers/nvmf.o 00:02:05.744 CXX test/cpp_headers/nvmf_spec.o 00:02:05.744 CXX test/cpp_headers/opal.o 00:02:05.744 CXX test/cpp_headers/nvmf_transport.o 00:02:05.744 CXX test/cpp_headers/nvme_zns.o 00:02:05.744 CXX test/cpp_headers/nvme_spec.o 00:02:05.744 CXX test/cpp_headers/pci_ids.o 00:02:05.744 CXX test/cpp_headers/pipe.o 00:02:05.744 CXX test/cpp_headers/queue.o 00:02:05.744 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:05.744 CXX test/cpp_headers/reduce.o 00:02:06.017 CXX test/cpp_headers/rpc.o 00:02:06.017 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:06.017 CXX test/cpp_headers/opal_spec.o 00:02:06.017 CXX test/cpp_headers/scsi.o 00:02:06.017 CC test/nvme/simple_copy/simple_copy.o 00:02:06.017 CC examples/nvme/arbitration/arbitration.o 00:02:06.017 CC test/nvme/fdp/fdp.o 00:02:06.017 CXX test/cpp_headers/scheduler.o 00:02:06.017 CC test/event/app_repeat/app_repeat.o 00:02:06.017 CC examples/nvme/hotplug/hotplug.o 00:02:06.017 CC test/thread/poller_perf/poller_perf.o 00:02:06.017 CC test/nvme/fused_ordering/fused_ordering.o 00:02:06.017 CC examples/thread/thread/thread_ex.o 00:02:06.017 CC test/nvme/reset/reset.o 00:02:06.017 LINK interrupt_tgt 00:02:06.017 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:02:06.017 LINK vhost 00:02:06.017 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:06.017 CC test/nvme/connect_stress/connect_stress.o 00:02:06.017 CC examples/ioat/verify/verify.o 00:02:06.017 CC test/bdev/bdevio/bdevio.o 00:02:06.017 CC examples/idxd/perf/perf.o 00:02:06.017 CXX test/cpp_headers/scsi_spec.o 00:02:06.017 CC examples/ioat/perf/perf.o 00:02:06.017 CC examples/bdev/hello_world/hello_bdev.o 00:02:06.017 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:06.017 LINK spdk_trace_record 00:02:06.017 CC examples/nvmf/nvmf/nvmf.o 00:02:06.017 CC test/nvme/overhead/overhead.o 00:02:06.017 CC examples/blob/hello_world/hello_blob.o 00:02:06.017 CXX test/cpp_headers/sock.o 00:02:06.017 LINK err_injection 00:02:06.017 LINK spdk_nvme_discover 00:02:06.284 LINK zipf 00:02:06.284 CC app/fio/bdev/fio_plugin.o 00:02:06.284 CC examples/blob/cli/blobcli.o 00:02:06.284 LINK led 00:02:06.284 LINK histogram_perf 00:02:06.284 LINK vtophys 00:02:06.284 LINK spdk_dd 00:02:06.284 LINK reactor 00:02:06.284 LINK jsoncat 00:02:06.284 LINK stub 00:02:06.284 LINK reserve 00:02:06.284 CC examples/sock/hello_world/hello_sock.o 00:02:06.284 LINK nvmf_tgt 00:02:06.284 LINK mkfs 00:02:06.284 LINK hello_world 00:02:06.284 LINK boot_partition 00:02:06.284 LINK sgl 00:02:06.284 CC test/env/memory/memory_ut.o 00:02:06.284 LINK spdk_tgt 00:02:06.284 LINK scheduler 00:02:06.284 CC test/env/mem_callbacks/mem_callbacks.o 00:02:06.284 LINK nvme_dp 00:02:06.284 CC test/lvol/esnap/esnap.o 00:02:06.284 CXX test/cpp_headers/stdinc.o 00:02:06.284 CXX test/cpp_headers/string.o 00:02:06.284 CXX test/cpp_headers/thread.o 00:02:06.284 CXX test/cpp_headers/trace.o 00:02:06.284 CXX test/cpp_headers/trace_parser.o 00:02:06.284 CXX test/cpp_headers/tree.o 00:02:06.284 CXX test/cpp_headers/ublk.o 00:02:06.284 CXX test/cpp_headers/util.o 00:02:06.284 LINK aer 00:02:06.284 CXX test/cpp_headers/version.o 00:02:06.284 CXX test/cpp_headers/vfio_user_pci.o 
00:02:06.284 CXX test/cpp_headers/vfio_user_spec.o 00:02:06.284 CXX test/cpp_headers/vmd.o 00:02:06.284 CC test/dma/test_dma/test_dma.o 00:02:06.284 CXX test/cpp_headers/xor.o 00:02:06.284 CXX test/cpp_headers/uuid.o 00:02:06.284 CXX test/cpp_headers/vhost.o 00:02:06.284 CXX test/cpp_headers/zipf.o 00:02:06.284 LINK spdk_trace 00:02:06.544 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:06.544 LINK simple_copy 00:02:06.544 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:06.544 LINK reconnect 00:02:06.544 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:06.544 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:06.544 LINK hello_blob 00:02:06.544 LINK abort 00:02:06.544 LINK dif 00:02:06.544 LINK arbitration 00:02:06.544 LINK accel_perf 00:02:06.544 LINK iscsi_tgt 00:02:06.544 LINK env_dpdk_post_init 00:02:06.544 LINK hello_sock 00:02:06.544 LINK spdk_nvme 00:02:06.544 LINK rpc_client_test 00:02:06.805 LINK lsvmd 00:02:06.805 LINK event_perf 00:02:06.805 LINK spdk_nvme_perf 00:02:06.805 LINK reactor_perf 00:02:06.805 LINK poller_perf 00:02:06.805 LINK bdev_svc 00:02:06.805 LINK app_repeat 00:02:06.805 LINK nvme_fuzz 00:02:06.805 LINK spdk_nvme_identify 00:02:06.805 LINK cmb_copy 00:02:06.805 LINK startup 00:02:06.805 LINK doorbell_aers 00:02:06.805 LINK blobcli 00:02:06.805 LINK ioat_perf 00:02:06.805 LINK connect_stress 00:02:06.805 LINK pmr_persistence 00:02:06.805 LINK spdk_top 00:02:06.805 LINK fused_ordering 00:02:06.805 LINK nvme_compliance 00:02:06.805 LINK overhead 00:02:06.805 LINK test_dma 00:02:06.805 LINK fdp 00:02:06.805 LINK vhost_fuzz 00:02:06.805 LINK hello_bdev 00:02:06.805 LINK idxd_perf 00:02:07.066 LINK verify 00:02:07.066 LINK thread 00:02:07.066 LINK hotplug 00:02:07.066 LINK nvmf 00:02:07.066 LINK mem_callbacks 00:02:07.066 LINK bdevperf 00:02:07.066 LINK reset 00:02:07.066 LINK pci_ut 00:02:07.066 LINK bdevio 00:02:07.066 LINK spdk_bdev 00:02:07.066 LINK nvme_manage 00:02:07.639 LINK memory_ut 00:02:07.639 LINK cuse 00:02:07.936 LINK iscsi_fuzz 
00:02:10.483 LINK esnap 00:02:10.745 00:02:10.745 real 0m49.184s 00:02:10.745 user 6m27.904s 00:02:10.745 sys 4m27.933s 00:02:10.745 08:40:33 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:02:10.745 08:40:33 make -- common/autotest_common.sh@10 -- $ set +x 00:02:10.745 ************************************ 00:02:10.745 END TEST make 00:02:10.745 ************************************ 00:02:10.745 08:40:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:10.745 08:40:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:10.745 08:40:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:10.745 08:40:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.745 08:40:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:10.745 08:40:33 -- pm/common@44 -- $ pid=2232647 00:02:10.745 08:40:33 -- pm/common@50 -- $ kill -TERM 2232647 00:02:10.745 08:40:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.745 08:40:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:10.745 08:40:33 -- pm/common@44 -- $ pid=2232648 00:02:10.745 08:40:33 -- pm/common@50 -- $ kill -TERM 2232648 00:02:10.745 08:40:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.745 08:40:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:10.745 08:40:33 -- pm/common@44 -- $ pid=2232650 00:02:10.745 08:40:33 -- pm/common@50 -- $ kill -TERM 2232650 00:02:10.745 08:40:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.745 08:40:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:10.745 08:40:33 -- pm/common@44 -- $ pid=2232675 00:02:10.745 08:40:33 -- pm/common@50 -- $ sudo -E kill -TERM 2232675 00:02:10.745 
08:40:33 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:10.745 08:40:33 -- nvmf/common.sh@7 -- # uname -s 00:02:10.745 08:40:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:10.745 08:40:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:10.745 08:40:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:10.745 08:40:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:10.745 08:40:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:10.745 08:40:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:10.745 08:40:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:10.745 08:40:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:10.745 08:40:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:10.745 08:40:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:10.745 08:40:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:10.745 08:40:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:10.745 08:40:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:10.745 08:40:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:10.745 08:40:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:10.745 08:40:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:10.745 08:40:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:10.745 08:40:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:10.745 08:40:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:10.745 08:40:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:10.745 08:40:33 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.745 08:40:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.745 08:40:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.745 08:40:33 -- paths/export.sh@5 -- # export PATH 00:02:10.745 08:40:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.745 08:40:33 -- nvmf/common.sh@47 -- # : 0 00:02:10.745 08:40:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:10.745 08:40:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:10.745 08:40:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:10.745 08:40:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:10.745 08:40:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:10.745 08:40:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:10.745 08:40:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:10.745 08:40:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:10.745 08:40:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:10.745 08:40:33 -- spdk/autotest.sh@32 -- # 
uname -s 00:02:10.745 08:40:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:10.745 08:40:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:10.745 08:40:33 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:10.745 08:40:33 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:10.746 08:40:33 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:10.746 08:40:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:10.746 08:40:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:10.746 08:40:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:10.746 08:40:33 -- spdk/autotest.sh@48 -- # udevadm_pid=2294848 00:02:10.746 08:40:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:10.746 08:40:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:10.746 08:40:33 -- pm/common@17 -- # local monitor 00:02:10.746 08:40:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.746 08:40:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.746 08:40:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.746 08:40:33 -- pm/common@21 -- # date +%s 00:02:10.746 08:40:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.746 08:40:33 -- pm/common@25 -- # sleep 1 00:02:10.746 08:40:33 -- pm/common@21 -- # date +%s 00:02:10.746 08:40:33 -- pm/common@21 -- # date +%s 00:02:10.746 08:40:33 -- pm/common@21 -- # date +%s 00:02:10.746 08:40:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717915233 00:02:10.746 08:40:33 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717915233 00:02:10.746 08:40:33 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717915233 00:02:10.746 08:40:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717915233 00:02:11.007 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717915233_collect-vmstat.pm.log 00:02:11.007 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717915233_collect-cpu-load.pm.log 00:02:11.007 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717915233_collect-cpu-temp.pm.log 00:02:11.007 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717915233_collect-bmc-pm.bmc.pm.log 00:02:11.951 08:40:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:11.951 08:40:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:11.951 08:40:34 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:11.951 08:40:34 -- common/autotest_common.sh@10 -- # set +x 00:02:11.951 08:40:34 -- spdk/autotest.sh@59 -- # create_test_list 00:02:11.951 08:40:34 -- common/autotest_common.sh@747 -- # xtrace_disable 00:02:11.951 08:40:34 -- common/autotest_common.sh@10 -- # set +x 00:02:11.951 08:40:34 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:11.951 08:40:34 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.951 08:40:34 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.951 08:40:34 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:11.951 08:40:34 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.951 08:40:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:11.951 08:40:34 -- common/autotest_common.sh@1454 -- # uname 00:02:11.951 08:40:34 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:02:11.951 08:40:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:11.951 08:40:34 -- common/autotest_common.sh@1474 -- # uname 00:02:11.951 08:40:34 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:02:11.951 08:40:34 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:11.951 08:40:34 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:11.951 08:40:34 -- spdk/autotest.sh@72 -- # hash lcov 00:02:11.951 08:40:34 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:11.951 08:40:34 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:11.951 --rc lcov_branch_coverage=1 00:02:11.951 --rc lcov_function_coverage=1 00:02:11.951 --rc genhtml_branch_coverage=1 00:02:11.951 --rc genhtml_function_coverage=1 00:02:11.951 --rc genhtml_legend=1 00:02:11.951 --rc geninfo_all_blocks=1 00:02:11.951 ' 00:02:11.951 08:40:34 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:11.951 --rc lcov_branch_coverage=1 00:02:11.951 --rc lcov_function_coverage=1 00:02:11.951 --rc genhtml_branch_coverage=1 00:02:11.951 --rc genhtml_function_coverage=1 00:02:11.951 --rc genhtml_legend=1 00:02:11.951 --rc geninfo_all_blocks=1 00:02:11.951 ' 00:02:11.951 08:40:34 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:11.951 --rc lcov_branch_coverage=1 00:02:11.951 --rc lcov_function_coverage=1 00:02:11.951 --rc genhtml_branch_coverage=1 00:02:11.952 --rc 
genhtml_function_coverage=1 00:02:11.952 --rc genhtml_legend=1 00:02:11.952 --rc geninfo_all_blocks=1 00:02:11.952 --no-external' 00:02:11.952 08:40:34 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:11.952 --rc lcov_branch_coverage=1 00:02:11.952 --rc lcov_function_coverage=1 00:02:11.952 --rc genhtml_branch_coverage=1 00:02:11.952 --rc genhtml_function_coverage=1 00:02:11.952 --rc genhtml_legend=1 00:02:11.952 --rc geninfo_all_blocks=1 00:02:11.952 --no-external' 00:02:11.952 08:40:34 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:11.952 lcov: LCOV version 1.14 00:02:11.952 08:40:34 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:24.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:24.192 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:39.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:39.114 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:39.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no 
functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:39.115 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no 
functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:39.115 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:39.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:39.116 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno
00:02:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found
00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno
00:02:39.116
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:39.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:40.059 08:41:02 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:40.059 08:41:02 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:40.059 08:41:02 -- common/autotest_common.sh@10 -- # set +x 00:02:40.059 08:41:02 -- spdk/autotest.sh@91 -- # rm -f 00:02:40.320 08:41:02 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:42.869 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:42.869 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:42.869 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:42.869 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:42.869 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:43.130 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:43.130 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:43.130 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:43.130 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:43.130 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:43.130 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:43.130 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:43.130 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:43.130 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:43.130 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:43.130 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:43.391 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:43.652 08:41:05 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:43.652 08:41:05 -- common/autotest_common.sh@1668 -- # zoned_devs=() 
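The `get_zoned_devs` call that begins here filters out zoned block devices before the pre-cleanup wipe. A minimal sketch of that scan, simplified to collect device names (the real helper in autotest_common.sh maps them to PCI addresses); the sysfs layout is the standard kernel interface, and the `SYSFS` override is an illustrative addition so the logic can be exercised against a fake tree:

```shell
#!/usr/bin/env bash
# Sketch of the zoned-device scan from autotest_common.sh. SYSFS is an
# illustrative override (defaults to /sys) used only for testing; the
# /sys/block/<dev>/queue/zoned file reports "none", "host-aware", or
# "host-managed".
is_block_zoned() {
    local device=$1
    # Conventional (non-zoned) devices report "none".
    [[ -e ${SYSFS:-/sys}/block/$device/queue/zoned ]] || return 1
    [[ $(<"${SYSFS:-/sys}/block/$device/queue/zoned") != none ]]
}

get_zoned_devs() {
    # Collect the names of all zoned nvme block devices.
    zoned_devs=()
    local nvme
    for nvme in "${SYSFS:-/sys}"/block/nvme*; do
        [[ -e $nvme ]] || continue
        if is_block_zoned "${nvme##*/}"; then
            zoned_devs+=("${nvme##*/}")
        fi
    done
}
```

In the run above no zoned devices are found, so the subsequent `(( 0 > 0 ))` check is false and the namespace proceeds to the GPT check and `dd` wipe.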
00:02:43.652 08:41:05 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:02:43.652 08:41:05 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:02:43.652 08:41:05 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:43.652 08:41:05 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:02:43.652 08:41:05 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:02:43.652 08:41:05 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:43.652 08:41:05 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:02:43.652 08:41:05 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:43.652 08:41:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:43.652 08:41:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:43.652 08:41:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:43.652 08:41:05 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:43.652 08:41:05 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:43.652 No valid GPT data, bailing 00:02:43.652 08:41:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:43.652 08:41:06 -- scripts/common.sh@391 -- # pt= 00:02:43.652 08:41:06 -- scripts/common.sh@392 -- # return 1 00:02:43.652 08:41:06 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:43.652 1+0 records in 00:02:43.652 1+0 records out 00:02:43.652 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00405703 s, 258 MB/s 00:02:43.652 08:41:06 -- spdk/autotest.sh@118 -- # sync 00:02:43.652 08:41:06 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:43.652 08:41:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:43.652 08:41:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:51.873 08:41:13 -- spdk/autotest.sh@124 -- # uname -s 00:02:51.873 08:41:13 -- 
spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:51.873 08:41:13 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:51.873 08:41:13 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:02:51.873 08:41:13 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:02:51.873 08:41:13 -- common/autotest_common.sh@10 -- # set +x 00:02:51.873 ************************************ 00:02:51.873 START TEST setup.sh 00:02:51.873 ************************************ 00:02:51.873 08:41:14 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:51.873 * Looking for test storage... 00:02:51.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:51.873 08:41:14 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:51.873 08:41:14 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:51.873 08:41:14 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:51.873 08:41:14 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:02:51.873 08:41:14 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:02:51.873 08:41:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:51.873 ************************************ 00:02:51.873 START TEST acl 00:02:51.873 ************************************ 00:02:51.873 08:41:14 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:51.873 * Looking for test storage... 
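The `run_test` wrapper that produces the `START TEST` / `END TEST` banners above can be sketched as follows. This is a simplified reconstruction from the log output, not the exact autotest_common.sh implementation (which also records timing and toggles xtrace around the body):

```shell
#!/usr/bin/env bash
# Simplified sketch of run_test: name a test, run it, and bracket its
# output with the banners seen in the log, preserving the exit status.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}
```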
00:02:51.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:51.873 08:41:14 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:51.873 08:41:14 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:02:51.873 08:41:14 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:02:51.873 08:41:14 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:02:51.873 08:41:14 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:51.873 08:41:14 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:02:51.873 08:41:14 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:02:51.873 08:41:14 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:51.873 08:41:14 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:02:51.873 08:41:14 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:51.873 08:41:14 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:51.873 08:41:14 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:51.873 08:41:14 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:51.873 08:41:14 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:51.873 08:41:14 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:51.873 08:41:14 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:56.081 08:41:18 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:56.081 08:41:18 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:56.081 08:41:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.081 08:41:18 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:56.081 08:41:18 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.081 08:41:18 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:59.386 Hugepages 00:02:59.386 node hugesize free / total 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 00:02:59.386 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:59.386 08:41:21 setup.sh.acl -- 
setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 
0000:80:01.5 == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:59.386 08:41:21 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:59.386 08:41:21 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:02:59.386 08:41:21 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:02:59.386 08:41:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:59.386 ************************************ 00:02:59.386 START TEST denied 00:02:59.386 ************************************ 00:02:59.386 08:41:21 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:02:59.386 08:41:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:02:59.386 08:41:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:59.386 08:41:21 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:02:59.386 08:41:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.386 08:41:21 setup.sh.acl.denied -- setup/common.sh@10 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:03.592 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:03.592 08:41:25 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:03.592 08:41:25 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:03.592 08:41:25 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:03.592 08:41:25 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:03.592 08:41:25 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:03.592 08:41:25 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:03.592 08:41:25 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:03.592 08:41:25 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:03.592 08:41:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:03.592 08:41:25 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.802 00:03:07.802 real 0m8.386s 00:03:07.802 user 0m2.834s 00:03:07.802 sys 0m4.829s 00:03:07.802 08:41:30 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:07.802 08:41:30 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:07.802 ************************************ 00:03:07.802 END TEST denied 00:03:07.802 ************************************ 00:03:07.802 08:41:30 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:07.802 08:41:30 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:07.802 08:41:30 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:07.802 08:41:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:07.802 ************************************ 00:03:07.802 START TEST allowed 00:03:07.802 
************************************ 00:03:07.802 08:41:30 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:03:07.802 08:41:30 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:07.802 08:41:30 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:07.802 08:41:30 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:07.802 08:41:30 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.802 08:41:30 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:13.089 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:13.089 08:41:35 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:13.090 08:41:35 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:13.090 08:41:35 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:13.090 08:41:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.090 08:41:35 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.338 00:03:17.338 real 0m9.190s 00:03:17.338 user 0m2.571s 00:03:17.338 sys 0m4.853s 00:03:17.338 08:41:39 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:17.338 08:41:39 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:17.338 ************************************ 00:03:17.338 END TEST allowed 00:03:17.338 ************************************ 00:03:17.338 00:03:17.338 real 0m25.179s 00:03:17.338 user 0m8.240s 00:03:17.338 sys 0m14.646s 00:03:17.338 08:41:39 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:17.338 08:41:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:17.338 ************************************ 00:03:17.338 END TEST acl 00:03:17.338 ************************************ 00:03:17.338 08:41:39 
setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:17.338 08:41:39 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:17.338 08:41:39 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:17.339 08:41:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:17.339 ************************************ 00:03:17.339 START TEST hugepages 00:03:17.339 ************************************ 00:03:17.339 08:41:39 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:17.339 * Looking for test storage... 00:03:17.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 
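The `get_meminfo Hugepagesize` call above drives the field-by-field parse that fills the following lines. The core of that parse, reading `/proc/meminfo` with `IFS=': '`, can be sketched like this (the optional file argument stands in for the per-node meminfo selection done by `setup/common.sh`):

```shell
#!/usr/bin/env bash
# Sketch of setup/common.sh get_meminfo: return the value of one
# /proc/meminfo field, e.g. "Hugepagesize" -> "2048".
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    # IFS=': ' splits "Hugepagesize:    2048 kB" into var/val/unit.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}
```

On the node above this reports a 2048 kB default hugepage size, matching the `Hugepagesize: 2048 kB` field in the captured meminfo dump.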
00:03:17.339 08:41:39 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 103168992 kB' 'MemAvailable: 106432624 kB' 'Buffers: 2704 kB' 'Cached: 14346604 kB' 'SwapCached: 0 kB' 'Active: 11393816 kB' 'Inactive: 3514596 kB' 'Active(anon): 10981788 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562448 kB' 'Mapped: 210844 kB' 'Shmem: 10422684 kB' 'KReclaimable: 322680 kB' 'Slab: 1188764 kB' 'SReclaimable: 322680 kB' 'SUnreclaim: 866084 kB' 'KernelStack: 27072 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460876 kB' 'Committed_AS: 12471308 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235172 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.339 08:41:39 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.339 08:41:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' [... the identical read / [[ field == Hugepagesize ]] / continue trace repeats for every remaining /proc/meminfo field, from MemAvailable through HugePages_Surp, with no match ...] 00:03:17.341 08:41:39 setup.sh.hugepages --
setup/common.sh@31 -- # IFS=': ' 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:17.341 08:41:39 setup.sh.hugepages 
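The trace above is `setup/common.sh`'s meminfo helper scanning `/proc/meminfo` field by field until the requested key (here `Hugepagesize`) matches, then echoing its value (`2048`) and returning. The sketch below reconstructs that loop shape from the trace; it is a simplified reconstruction, not the actual SPDK source, and reads from a sample string instead of the live `/proc/meminfo` so it is self-contained:

```shell
# Reconstruction (from the trace) of the get_meminfo scan pattern:
# split each line on ': ', skip fields until the requested key matches,
# then echo its numeric value.
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # non-matching fields -> continue, as in the trace
    echo "$val"
    return 0
  done
  return 1                              # key not present
}

# Sample input standing in for /proc/meminfo:
sample=$'MemTotal: 126338848 kB\nMemFree: 105334772 kB\nHugepagesize: 2048 kB'
get_meminfo Hugepagesize <<<"$sample"   # 2048
```

The `2048` result is what the trace stores as `default_hugepages=2048` immediately afterwards.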
-- setup/hugepages.sh@37 -- # local node hp 00:03:17.341 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:17.342 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:17.342 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:17.342 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:17.342 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:17.342 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:17.342 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:17.342 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:17.342 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:17.342 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:17.342 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:17.342 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:17.342 08:41:39 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:17.342 08:41:39 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:17.342 08:41:39 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:17.342 08:41:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:17.342 ************************************ 00:03:17.342 START TEST default_setup 00:03:17.342 ************************************ 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- 
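The `clear_hp` trace above zeroes every per-size `nr_hugepages` file under each NUMA node before `default_setup` allocates its own pool. A runnable sketch of that double loop, with the sysfs root parameterized purely so the sketch can run unprivileged against a throwaway mock tree (the real script writes under `/sys` directly; the function body is inferred from the trace, not copied from the SPDK source):

```shell
# Sketch of the clear_hp loop seen in the trace: for each NUMA node,
# write 0 into every hugepage size's nr_hugepages file.
clear_hp() {
  local sysfs=${1:-/sys} node hp
  for node in "$sysfs"/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
      echo 0 > "$hp/nr_hugepages"       # the trace shows one 'echo 0' per size per node
    done
  done
}

# Demo against a mock sysfs tree (hypothetical path, for illustration only):
mkdir -p /tmp/mocksys/devices/system/node/node0/hugepages/hugepages-2048kB
echo 1024 > /tmp/mocksys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
clear_hp /tmp/mocksys
cat /tmp/mocksys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages   # 0
```

After clearing, the trace exports `CLEAR_HUGE=yes` and launches the `default_setup` test.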
setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- 
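The `get_test_nr_hugepages 2097152 0` call traced above turns a requested pool size of 2097152 kB (2 GiB) into `nr_hugepages=1024` and assigns that count to node 0. The arithmetic implied by the trace is a straight division by the 2048 kB default page size (variable names follow the trace; this is a reconstruction, not the script itself):

```shell
# Page-count arithmetic implied by the get_test_nr_hugepages trace.
default_hugepages=2048                        # kB, Hugepagesize read from /proc/meminfo
size=2097152                                  # kB, requested pool (2 GiB)
nr_hugepages=$(( size / default_hugepages ))  # number of 2048 kB pages
echo "$nr_hugepages"                          # 1024
```

This matches the `nodes_test[_no_nodes]=1024` assignment in the trace: the whole 1024-page pool lands on the single requested node.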
setup/hugepages.sh@137 -- # setup output 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.342 08:41:39 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.654 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:20.654 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:20.654 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:20.654 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:20.654 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:20.654 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:20.654 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:20.654 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:20.654 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:20.654 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:20.654 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:20.654 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:20.654 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:20.654 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:20.654 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:20.654 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:20.654 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:20.654 
08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105334772 kB' 'MemAvailable: 108598404 kB' 'Buffers: 2704 kB' 'Cached: 14346728 kB' 'SwapCached: 0 kB' 'Active: 11409064 kB' 'Inactive: 3514596 kB' 'Active(anon): 10997036 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577592 kB' 'Mapped: 211432 kB' 'Shmem: 10422808 kB' 'KReclaimable: 322680 kB' 'Slab: 1186368 kB' 
'SReclaimable: 322680 kB' 'SUnreclaim: 863688 kB' 'KernelStack: 27104 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12486312 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235332 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.654 08:41:43 
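The `printf` above is `verify_nr_hugepages` re-emitting the `mapfile`-captured `/proc/meminfo` snapshot (note `HugePages_Total: 1024` and `HugePages_Free: 1024`, confirming the allocation), which the trace that follows re-scans key by key for `AnonHugePages`. A compact sketch of that array lookup over a few entries taken from the snapshot (the `lookup` helper is hypothetical; the trace performs the same scan inline):

```shell
# A few entries from the captured snapshot above:
mem=('HugePages_Total: 1024' 'HugePages_Free: 1024' 'Hugepagesize: 2048 kB' 'AnonHugePages: 0 kB')

# Hypothetical helper mirroring the inline scan in the trace:
# split each stored "Key: value unit" entry and return the value
# for the requested key.
lookup() {
  local get=$1 var val _ line
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<<"$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  return 1
}

lookup AnonHugePages   # 0
```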
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.654 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' [... the identical read / [[ field == AnonHugePages ]] / continue trace repeats for each subsequent snapshot field: Cached, SwapCached, Active, Inactive, ..., Bounce ...] 00:03:20.655 08:41:43 setup.sh.hugepages.default_setup --
setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.655 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.655 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.655 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.655 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.655 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.655 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.655 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.655 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.655 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.656 08:41:43 
setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.656 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105334736 kB' 'MemAvailable: 108598368 kB' 'Buffers: 2704 kB' 'Cached: 14346728 kB' 'SwapCached: 0 kB' 'Active: 11408868 kB' 'Inactive: 3514596 kB' 'Active(anon): 10996840 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577456 kB' 'Mapped: 211136 kB' 'Shmem: 10422808 kB' 'KReclaimable: 322680 kB' 'Slab: 1186368 kB' 'SReclaimable: 322680 kB' 'SUnreclaim: 863688 kB' 'KernelStack: 27152 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12486328 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235300 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:20.656 08:41:43
[... per-key trace elided: the get_meminfo loop logs IFS=': ', read -r var val _, [[ $var == HugePages_Surp ]] and continue for every /proc/meminfo field from MemTotal through HugePages_Rsvd ...]
00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:20.923 08:41:43 setup.sh.hugepages.default_setup --
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105335052 kB' 'MemAvailable: 108598684 kB' 'Buffers: 2704 kB' 'Cached: 14346748 kB' 'SwapCached: 0 kB' 'Active: 11408856 kB' 'Inactive: 3514596 kB' 'Active(anon): 10996828 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577428 kB' 'Mapped: 211136 kB' 'Shmem: 10422828 kB' 'KReclaimable: 322680 kB' 'Slab: 1186440 kB' 'SReclaimable: 322680 kB' 'SUnreclaim: 863760 kB' 'KernelStack: 27152 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12486352 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235300 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.923 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:20.924 nr_hugepages=1024 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:20.924 resv_hugepages=0 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:20.924 
surplus_hugepages=0 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:20.924 anon_hugepages=0 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105335436 kB' 'MemAvailable: 108599068 kB' 'Buffers: 2704 kB' 'Cached: 14346764 kB' 'SwapCached: 0 kB' 'Active: 11408760 kB' 'Inactive: 3514596 kB' 'Active(anon): 10996732 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 
'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577300 kB' 'Mapped: 211136 kB' 'Shmem: 10422844 kB' 'KReclaimable: 322680 kB' 'Slab: 1186440 kB' 'SReclaimable: 322680 kB' 'SUnreclaim: 863760 kB' 'KernelStack: 27136 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12486372 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235300 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.924 
08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.924 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace elided: the read loop skips every non-matching /proc/meminfo key (Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) via continue] 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29
-- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.925 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.926 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.926 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 
'MemFree: 50528164 kB' 'MemUsed: 15130844 kB' 'SwapCached: 0 kB' 'Active: 7006556 kB' 'Inactive: 3323512 kB' 'Active(anon): 6857316 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10114520 kB' 'Mapped: 56328 kB' 'AnonPages: 218788 kB' 'Shmem: 6641768 kB' 'KernelStack: 12296 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 190104 kB' 'Slab: 718124 kB' 'SReclaimable: 190104 kB' 'SUnreclaim: 528020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:20.926 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace elided: the read loop skips every non-matching node0 meminfo key (MemTotal through HugePages_Free) via continue] 00:03:20.926 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.926 08:41:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:20.926 08:41:43
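The xtrace above is setup/common.sh's meminfo lookup: each line of `/proc/meminfo` is split on `': '` with `read`, and the loop `continue`s past non-matching keys until the requested one (here `HugePages_Surp`) matches, then echoes its value. A minimal standalone sketch of that lookup technique (the function name is illustrative, not the actual SPDK helper; setup/common.sh additionally strips a "Node N " prefix when reading the per-node `/sys/devices/system/node/nodeN/meminfo` file):

```shell
#!/usr/bin/env bash
# Sketch: split each /proc/meminfo line on ': ' and keep reading until the
# requested key matches, then print its value -- the pattern in the xtrace.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1   # key not present in this kernel's meminfo
}

get_meminfo_sketch MemTotal
```

Splitting with `IFS=': '` also discards the trailing "kB" unit into the throwaway `_` field, which is why the log compares bare numbers like 1024.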
setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:20.926 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.926 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:20.926 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:20.926 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:20.926 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:20.926 node0=1024 expecting 1024 00:03:20.926 08:41:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:20.926 00:03:20.926 real 0m3.737s 00:03:20.926 user 0m1.372s 00:03:20.926 sys 0m2.319s 00:03:20.926 08:41:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:20.926 08:41:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:20.926 ************************************ 00:03:20.926 END TEST default_setup 00:03:20.926 ************************************ 00:03:20.926 08:41:43 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:20.926 08:41:43 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:20.926 08:41:43 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:20.926 08:41:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:20.926 ************************************ 00:03:20.926 START TEST per_node_1G_alloc 00:03:20.926 ************************************ 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:20.926 
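The `(( 1024 == nr_hugepages + surp + resv ))` check and the closing `node0=1024 expecting 1024` line above encode the test's pass condition: the kernel's reported HugePages_Total must equal the requested page count plus surplus and reserved pages. In miniature, with the values read in this run (surplus and reserved both 0 here; the variable names follow the log):

```shell
# Pass condition from the log, with this run's values plugged in.
nr_hugepages=1024     # pages the test requested
surp=0                # HugePages_Surp read from meminfo
resv=0                # reserved pages (0 in this run)
hugepages_total=1024  # HugePages_Total reported by the kernel

if (( hugepages_total == nr_hugepages + surp + resv )); then
    echo "node0=${nr_hugepages} expecting ${hugepages_total}"
fi
```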
08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:20.926 08:41:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.926 08:41:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:24.231 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:24.231 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:24.231 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:24.231 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:24.231 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:24.231 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:24.231 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:24.231 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:24.231 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:24.231 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:24.231 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:24.231 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:24.231 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:24.231 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:24.231 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:24.231 
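The `get_test_nr_hugepages 1048576 0 1` trace above reduces to simple arithmetic: the 1048576 kB (1 GiB) request divided by the default hugepage size gives 512 pages, and that count is assigned to each node named in the argument list, yielding `NRHUGE=512 HUGENODE=0,1`. A rough sketch of the derivation (the 2048 kB default hugepage size is an assumption here; the real script reads it from the system):

```shell
# Hedged sketch of the per-node hugepage count seen in the trace.
# size_kb and the node ids come from the trace; the 2048 kB default
# hugepage size is assumed (setup/common.sh queries the kernel).
size_kb=1048576
default_hugepage_kb=2048
nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 512

declare -A nodes_test
for node in 0 1; do
  nodes_test[$node]=$nr_hugepages
done
echo "NRHUGE=$nr_hugepages HUGENODE=0,1"
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"
```

This matches the trace: `nr_hugepages=512`, then `nodes_test[_no_nodes]=512` once per user node before `return 0`.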
0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:24.231 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.502 08:41:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105352272 kB' 'MemAvailable: 108615904 kB' 'Buffers: 2704 kB' 'Cached: 14346884 kB' 'SwapCached: 0 kB' 'Active: 11409436 kB' 'Inactive: 3514596 kB' 'Active(anon): 10997408 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577336 kB' 'Mapped: 210040 kB' 'Shmem: 10422964 kB' 'KReclaimable: 322680 kB' 'Slab: 1186468 kB' 'SReclaimable: 322680 kB' 'SUnreclaim: 863788 kB' 'KernelStack: 27152 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12478108 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:41:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:24.503 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105358528 kB' 'MemAvailable: 108622160 kB' 'Buffers: 2704 kB' 'Cached: 14346888 kB' 'SwapCached: 0 kB' 'Active: 11408136 kB' 'Inactive: 3514596 kB' 'Active(anon): 10996108 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576132 kB' 'Mapped: 210012 kB' 'Shmem: 10422968 kB' 'KReclaimable: 322680 kB' 'Slab: 1186368 kB' 'SReclaimable: 322680 kB' 'SUnreclaim: 863688 kB' 'KernelStack: 27104 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12476252 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.504 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.504 08:41:46 
00:03:24.504-00:03:24.506 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repeated per-key scan elided: each /proc/meminfo key from Inactive(anon) through HugePages_Rsvd tested against HugePages_Surp and skipped via continue]
00:03:24.506 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.506 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.506 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:24.506 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:24.506 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:24.506 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:24.506 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:24.506 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:24.506 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.506 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.506 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.506 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.506 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.506 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.506 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.506 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.506 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105358644 kB' 'MemAvailable: 108622276 kB' 'Buffers: 2704 kB' 'Cached: 14346908 kB' 'SwapCached: 0 kB' 'Active: 11407348 kB' 'Inactive: 3514596 kB' 'Active(anon): 10995320 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575688 kB' 'Mapped: 209932 kB' 'Shmem: 10422988 kB' 'KReclaimable: 322680 kB' 'Slab: 1186360 kB' 'SReclaimable: 322680 kB' 'SUnreclaim: 863680 kB' 'KernelStack: 27088 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12476276 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB'
00:03:24.506-00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repeated per-key scan elided: keys MemTotal through HugePages_Total each tested against HugePages_Rsvd and skipped via continue; trace truncated mid-entry]
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:24.508 nr_hugepages=1024 00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.508 resv_hugepages=0 00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.508 surplus_hugepages=0 00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.508 anon_hugepages=0 00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages ))
00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105358752 kB' 'MemAvailable: 108622384 kB' 'Buffers: 2704 kB' 'Cached: 14346948 kB' 'SwapCached: 0 kB' 'Active: 11407076 kB' 'Inactive: 3514596 kB' 'Active(anon): 10995048 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575340 kB' 'Mapped: 209932 kB' 'Shmem: 10423028 kB' 'KReclaimable: 322680 kB' 'Slab: 1186360 kB' 'SReclaimable: 322680 kB' 'SUnreclaim: 863680 kB' 'KernelStack: 27088 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12476300 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB'
00:03:24.508 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] (no match, continue; repeated for every /proc/meminfo field preceding HugePages_Total)
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@112 -- # get_nodes
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 51597192 kB' 'MemUsed: 14061816 kB' 'SwapCached: 0 kB' 'Active: 7004112 kB' 'Inactive: 3323512 kB' 'Active(anon): 6854872 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10114552 kB' 'Mapped: 55520 kB' 'AnonPages: 216256 kB' 'Shmem: 6641800 kB' 'KernelStack: 12280 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 190104 kB' 'Slab: 718128 kB' 'SReclaimable: 190104 kB' 'SUnreclaim: 528024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.510 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.511 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var 
val 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 53762640 kB' 'MemUsed: 6917200 kB' 'SwapCached: 0 kB' 'Active: 4403660 kB' 'Inactive: 191084 kB' 'Active(anon): 4140872 kB' 'Inactive(anon): 0 kB' 'Active(file): 262788 kB' 'Inactive(file): 191084 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4235128 kB' 'Mapped: 154420 kB' 'AnonPages: 359796 kB' 'Shmem: 3781256 kB' 'KernelStack: 14856 kB' 'PageTables: 4760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132576 kB' 'Slab: 468232 kB' 'SReclaimable: 132576 kB' 'SUnreclaim: 335656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.512 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 
08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 08:41:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace collapsed: the IFS=': ' read loop compares each remaining /proc/meminfo field (ShmemHugePages through HugePages_Free) against HugePages_Surp and takes 'continue' on every non-match]
00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:24.513 node0=512 expecting 512
00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:24.513 node1=512 expecting 512
00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:24.513
00:03:24.513 real	0m3.602s
00:03:24.513 user	0m1.410s
00:03:24.513 sys	0m2.221s
00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:24.513 08:41:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:24.513 ************************************
00:03:24.513 END TEST per_node_1G_alloc
00:03:24.513 ************************************
00:03:24.513 08:41:47 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:24.513 08:41:47 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:24.513 08:41:47 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:24.513 08:41:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:24.775 ************************************
00:03:24.775 START TEST even_2G_alloc
00:03:24.775 ************************************
00:03:24.775 08:41:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc
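The START/END banners, `time` output, and xtrace toggles that bracket each test above come from the harness's `run_test` wrapper. A minimal sketch of that pattern follows; the body is an assumption for illustration, not SPDK's actual `autotest_common.sh` implementation.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the run_test banner/timing pattern seen in the log.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"          # timing lines go to stderr, as in the log
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test demo_alloc true
```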
00:03:24.775 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:24.775 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:24.775 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:24.775 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:24.775 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:24.775 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:24.775 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:24.775 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:24.775 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:24.775 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:24.775 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:24.775 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:24.775 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:24.775 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:24.775 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:24.776 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:24.776 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:24.776 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:24.776 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:24.776 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:24.776 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:24.776 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:24.776 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:24.776 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:24.776 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:24.776 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:24.776 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:24.776 08:41:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:28.083 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:28.083 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:28.083 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:28.083 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:28.083 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:28.083 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:28.083 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:28.083 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:28.083 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:28.083 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:28.083 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:28.083 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:28.083 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:28.083 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:28.083 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
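The xtrace above shows `get_test_nr_hugepages_per_node` walking `_no_nodes` down from 2 and assigning 512 pages to each node's slot in `nodes_test`. A minimal sketch of that even split; the explicit division is an assumption, since the log shows only the resulting per-node assignments.

```shell
#!/usr/bin/env bash
# Even split of nr_hugepages across NUMA nodes, mirroring the
# nodes_test[_no_nodes - 1]=512 assignments in the xtrace above.
nr_hugepages=1024
_no_nodes=2
declare -a nodes_test

per_node=$((nr_hugepages / _no_nodes))   # 1024 / 2 = 512 (assumed arithmetic)
while ((_no_nodes > 0)); do
    nodes_test[_no_nodes - 1]=$per_node  # fill highest-numbered node first
    ((_no_nodes--)) || true
done

echo "node0=${nodes_test[0]} expecting 512"
echo "node1=${nodes_test[1]} expecting 512"
```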
00:03:28.083 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:28.083 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
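Before counting anonymous hugepages, the check at `setup/hugepages.sh@96` gates on the transparent-hugepage mode string, which sysfs renders with the active mode in brackets (here `always [madvise] never`). A sketch of that pattern match, using a canned mode string in place of reading `/sys/kernel/mm/transparent_hugepage/enabled`:

```shell
#!/usr/bin/env bash
# THP gate as in the xtrace above: anon hugepage accounting only matters
# when the global mode is not [never]. The canned string is a stand-in for
# $(</sys/kernel/mm/transparent_hugepage/enabled).
mode='always [madvise] never'

if [[ $mode != *"[never]"* ]]; then   # the log's *\[\n\e\v\e\r\]* glob, unescaped
    thp_active=yes
else
    thp_active=no
fi
echo "thp_active=$thp_active"
```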
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105358804 kB' 'MemAvailable: 108622436 kB' 'Buffers: 2704 kB' 'Cached: 14347072 kB' 'SwapCached: 0 kB' 'Active: 11416672 kB' 'Inactive: 3514596 kB' 'Active(anon): 11004644 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583880 kB' 'Mapped: 210924 kB' 'Shmem: 10423152 kB' 'KReclaimable: 322680 kB' 'Slab: 1186348 kB' 'SReclaimable: 322680 kB' 'SUnreclaim: 863668 kB' 'KernelStack: 27200 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12484428 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235512 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB'
00:03:28.348 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace collapsed: read loop compares each snapshot field (MemTotal through HardwareCorrupted) against AnonHugePages and takes 'continue' on every non-match]
00:03:28.350 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:28.350 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.350 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:28.350 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:28.350 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:28.350 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.350 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:28.350 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:28.350 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.350 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.350 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.350 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.350 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.350 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.350 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.350 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.350 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105361524 kB' 'MemAvailable: 108625156 kB' 'Buffers: 2704 kB' 'Cached: 14347084 kB' 'SwapCached: 0 kB' 'Active: 11409264
kB' 'Inactive: 3514596 kB' 'Active(anon): 10997236 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576956 kB' 'Mapped: 209964 kB' 'Shmem: 10423164 kB' 'KReclaimable: 322680 kB' 'Slab: 1186352 kB' 'SReclaimable: 322680 kB' 'SUnreclaim: 863672 kB' 'KernelStack: 26960 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12478324 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235316 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB'
00:03:28.350 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace collapsed: read loop compares each snapshot field (MemTotal through Mapped, where this excerpt ends) against HugePages_Surp and takes 'continue' on every non-match]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 
08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.351 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.352 08:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105361372 kB' 'MemAvailable: 108625004 kB' 'Buffers: 2704 kB' 'Cached: 14347092 kB' 'SwapCached: 0 kB' 'Active: 11409816 kB' 'Inactive: 3514596 kB' 'Active(anon): 10997788 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577980 kB' 'Mapped: 209972 kB' 'Shmem: 10423172 kB' 'KReclaimable: 322680 kB' 'Slab: 1186352 kB' 'SReclaimable: 322680 kB' 'SUnreclaim: 863672 kB' 'KernelStack: 27120 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12478348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235460 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.352 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.352 
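The trace above is the expansion of SPDK's `get_meminfo` helper: a `while IFS=': ' read -r var val _` loop over `/proc/meminfo` that skips every key until the requested one matches, then echoes its value. A minimal sketch of that parsing pattern (not the exact `setup/common.sh` source; the function name and the optional file parameter here are assumptions for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the /proc/meminfo parsing pattern seen in the trace above.
# get_meminfo_value KEY [FILE]: print the value column for KEY and return 0,
# or return 1 if KEY is not present. FILE defaults to /proc/meminfo; it is a
# hypothetical parameter added here so the sketch is testable.
get_meminfo_value() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    # IFS=': ' splits "HugePages_Surp:       0" into var=HugePages_Surp, val=0
    # (a trailing unit like "kB" lands in the throwaway third field).
    while IFS=': ' read -r var val _; do
        # These skips are the "continue" lines flooding the trace above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}
```

With the snapshot printed above, `get_meminfo_value HugePages_Surp` yields `0`, which is exactly the `echo 0` / `surp=0` outcome the log records before the second pass for `HugePages_Rsvd` begins.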
08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [trace condensed: second get_meminfo pass scans the same /proc/meminfo keys (MemFree, MemAvailable, Buffers, Cached, ... VmallocChunk, Percpu, HardwareCorrupted) against HugePages_Rsvd, continuing on each non-match; trace truncated here]
[[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.353 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.353 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.353 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.353 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.353 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.353 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.353 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.353 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.353 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:28.354 nr_hugepages=1024 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:28.354 resv_hugepages=0 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:28.354 surplus_hugepages=0 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:28.354 anon_hugepages=0 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105361184 kB' 'MemAvailable: 108624816 kB' 'Buffers: 2704 kB' 'Cached: 14347092 kB' 'SwapCached: 0 kB' 'Active: 11410380 kB' 'Inactive: 3514596 kB' 'Active(anon): 10998352 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578084 kB' 'Mapped: 209972 kB' 'Shmem: 10423172 kB' 'KReclaimable: 322680 kB' 'Slab: 1185872 kB' 'SReclaimable: 322680 kB' 'SUnreclaim: 863192 kB' 'KernelStack: 27136 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12479976 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 
08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.354 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 
08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.355 
08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.355 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 51598960 kB' 'MemUsed: 14060048 kB' 'SwapCached: 0 kB' 'Active: 7004748 kB' 'Inactive: 3323512 kB' 'Active(anon): 6855508 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10114560 kB' 'Mapped: 55552 kB' 'AnonPages: 216304 kB' 'Shmem: 6641808 kB' 'KernelStack: 12296 kB' 'PageTables: 3544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 190104 kB' 'Slab: 717740 kB' 'SReclaimable: 190104 kB' 'SUnreclaim: 527636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.356 
08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.356 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.357 08:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.357 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 53765392 kB' 'MemUsed: 6914448 kB' 'SwapCached: 0 kB' 'Active: 4406008 kB' 'Inactive: 191084 kB' 'Active(anon): 4143220 kB' 'Inactive(anon): 0 kB' 'Active(file): 262788 kB' 'Inactive(file): 191084 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4235304 kB' 'Mapped: 154420 kB' 'AnonPages: 362048 kB' 'Shmem: 3781432 kB' 'KernelStack: 14920 kB' 'PageTables: 5004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132544 kB' 'Slab: 468100 kB' 'SReclaimable: 132544 kB' 'SUnreclaim: 335556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.620 08:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.620 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 
08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:28.621 node0=512 expecting 512 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:28.621 node1=512 expecting 512 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:28.621 00:03:28.621 real 0m3.847s 00:03:28.621 user 0m1.540s 00:03:28.621 sys 0m2.364s 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:28.621 08:41:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:28.621 ************************************ 00:03:28.621 END TEST even_2G_alloc 00:03:28.621 ************************************ 00:03:28.621 08:41:50 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:28.621 08:41:50 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:28.621 08:41:50 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:28.621 08:41:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:28.621 ************************************ 00:03:28.621 START TEST odd_alloc 00:03:28.621 ************************************ 00:03:28.621 08:41:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:03:28.621 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:28.621 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:28.621 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:28.621 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:28.621 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:28.621 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:28.621 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:28.621 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.621 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # 
local _nr_hugepages=1025 00:03:28.621 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:28.621 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.621 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.621 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:28.621 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:28.622 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:28.622 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:28.622 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:28.622 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:28.622 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:28.622 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:28.622 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:28.622 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:28.622 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:28.622 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:28.622 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:28.622 08:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:28.622 08:41:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.622 08:41:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:31.927 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:31.927 
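The odd_alloc setup traced above requests 1025 2 MiB pages and spreads them over the 2 NUMA nodes as 513 + 512. A minimal sketch of that even-split-plus-remainder logic (the helper name is illustrative, not the SPDK hugepages.sh code, which assigns the per-node counts in-place):

```shell
# Illustrative split of an odd hugepage count across NUMA nodes:
# every node gets total/nodes pages, and the remainder goes to the
# lowest-numbered nodes, so 1025 pages on 2 nodes becomes 513 + 512.
split_hugepages() {
    local total=$1 nodes=$2 node per rem
    local -a pages
    per=$((total / nodes))
    rem=$((total % nodes))
    for ((node = 0; node < nodes; node++)); do
        pages[node]=$per
        if (( node < rem )); then
            pages[node]=$((per + 1))
        fi
    done
    echo "${pages[@]}"
}

odd_split=$(split_hugepages 1025 2)   # expected: "513 512"
```

With an even total the remainder is zero and both nodes get the same count, which is the `node0=512 expecting 512` / `node1=512 expecting 512` result of the preceding even_2G_alloc test.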
0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:31.927 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:31.927 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:31.927 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:31.927 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:31.927 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:31.927 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:31.927 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:31.927 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:31.927 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:31.927 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:31.927 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:31.927 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:31.927 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:31.927 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:31.927 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc 
-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105371788 kB' 'MemAvailable: 108635404 kB' 'Buffers: 2704 kB' 'Cached: 14347244 kB' 'SwapCached: 0 kB' 'Active: 11408020 kB' 'Inactive: 3514596 kB' 'Active(anon): 10995992 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575884 kB' 'Mapped: 210016 kB' 'Shmem: 10423324 kB' 'KReclaimable: 322648 kB' 'Slab: 1186340 kB' 'SReclaimable: 322648 kB' 'SUnreclaim: 863692 kB' 'KernelStack: 27200 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12478052 
kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:32.194 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [meminfo scan for AnonHugePages: MemTotal..HardwareCorrupted skipped] 00:03:32.195 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.195 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.195 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.195 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:32.195 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.195 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.195 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:32.195 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.195 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.195 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.195 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.195 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.195 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.195 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.195 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.195 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.196 08:41:54 
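The long runs of `IFS=': '` / `read -r var val _` / `continue` in the trace are setup/common.sh's get_meminfo walking each `Key: value` line of the meminfo output until the requested key matches, then echoing that value and returning. The same pattern in self-contained form (reading from stdin instead of /proc/meminfo so the sample is deterministic; the function name is illustrative):

```shell
# Illustrative version of the meminfo scan traced above: skip every
# "Key: value" line whose key is not the requested one (the repeated
# 'continue' lines), then print the value on the first match and stop.
scan_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1   # key not present
}

sample="$(printf '%s\n' 'MemTotal: 126338848 kB' \
    'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0')"

surp=$(scan_meminfo HugePages_Surp <<<"$sample")   # expected: 0
```

Because `IFS` contains both `:` and a space, `read` splits `MemTotal: 126338848 kB` into the key, the numeric value, and the trailing unit, which lands in the throwaway `_` variable.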
setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105373252 kB' 'MemAvailable: 108636868 kB' 'Buffers: 2704 kB' 'Cached: 14347244 kB' 'SwapCached: 0 kB' 'Active: 11408028 kB' 'Inactive: 3514596 kB' 'Active(anon): 10996000 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575912 kB' 'Mapped: 209976 kB' 'Shmem: 10423324 kB' 'KReclaimable: 322648 kB' 'Slab: 1186312 kB' 'SReclaimable: 322648 kB' 'SUnreclaim: 863664 kB' 'KernelStack: 27168 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12478068 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:32.196 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [meminfo scan for HugePages_Surp: MemTotal..SUnreclaim skipped, scan continues] 00:03:32.196 08:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.196 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.196 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.196 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.196 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.196 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.196 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.196 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.196 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.196 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.196 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.197 08:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.197 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105372212 kB' 'MemAvailable: 108635828 kB' 'Buffers: 2704 kB' 'Cached: 14347264 kB' 'SwapCached: 0 kB' 'Active: 11408192 kB' 'Inactive: 3514596 kB' 'Active(anon): 10996164 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576084 kB' 'Mapped: 209976 kB' 'Shmem: 10423344 kB' 'KReclaimable: 322648 kB' 'Slab: 1186312 kB' 'SReclaimable: 322648 kB' 'SUnreclaim: 863664 kB' 'KernelStack: 27184 kB' 'PageTables: 8716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12478088 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:32.197 08:41:54
(loop repeats for each /proc/meminfo key, MemTotal through HugePages_Free; none match \H\u\g\e\P\a\g\e\s\_\R\s\v\d, so setup/common.sh@32 continues on every iteration) 00:03:32.197-00:03:32.199 08:41:54
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.199 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.199 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.199 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:32.199 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:32.199 nr_hugepages=1025 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.199 resv_hugepages=0 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:32.199 surplus_hugepages=0 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.199 anon_hugepages=0 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:32.199 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:32.199 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:32.199 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17
-- # local get=HugePages_Total 00:03:32.199 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:32.199 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.199 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.199 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.199 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.199 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.199 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.199 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.199 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.199 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105371544 kB' 'MemAvailable: 108635160 kB' 'Buffers: 2704 kB' 'Cached: 14347284 kB' 'SwapCached: 0 kB' 'Active: 11408212 kB' 'Inactive: 3514596 kB' 'Active(anon): 10996184 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576084 kB' 'Mapped: 209976 kB' 'Shmem: 10423364 kB' 'KReclaimable: 322648 kB' 'Slab: 1186312 kB' 'SReclaimable: 322648 kB' 'SUnreclaim: 863664 kB' 'KernelStack: 27184 kB' 'PageTables: 8716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12478112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235396 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 
kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 
08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.200 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == 
nr_hugepages + surp + resv )) 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.201 08:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 51594620 kB' 'MemUsed: 14064388 kB' 'SwapCached: 0 kB' 'Active: 7002740 kB' 'Inactive: 3323512 kB' 'Active(anon): 6853500 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10114564 kB' 'Mapped: 55560 kB' 'AnonPages: 214788 kB' 'Shmem: 6641812 kB' 'KernelStack: 12216 kB' 'PageTables: 3660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 190104 kB' 'Slab: 717896 kB' 'SReclaimable: 190104 kB' 'SUnreclaim: 527792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.201 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.201 08:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [repeated xtrace elided: IFS=': ' / read -r var val _ / [[ field == HugePages_Surp ]] / continue, once per node0 meminfo field from MemUsed through HugePages_Free] 00:03:32.201-00:03:32.202 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.202 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.202 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.203 08:41:54 
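The xtrace above repeats one small routine once per meminfo field: get_meminfo scans "Field: value" lines until the requested field matches, then echoes the value. A minimal standalone sketch of that lookup (input data here is illustrative; real per-node files carry a "Node N " prefix that the harness strips first):

```shell
get_meminfo() {
    # Scan "Field: value" lines on stdin; print the value of the requested field
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the step the xtrace repeats per field
        echo "$val"
        return 0
    done
    return 1
}

# Illustrative stand-in for a node's meminfo file
printf '%s\n' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' |
    get_meminfo HugePages_Surp    # prints 0
```

This is why the log shows a `continue` for every non-matching field before the final `echo 0` / `return 0`.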
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.203 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.203 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.203 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:32.203 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.203 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:32.203 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.203 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.203 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.203 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:32.203 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:32.203 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.203 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.203 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.203 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.203 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 53776260 kB' 'MemUsed: 6903580 kB' 'SwapCached: 0 kB' 'Active: 4405700 kB' 'Inactive: 191084 kB' 'Active(anon): 4142912 kB' 'Inactive(anon): 0 kB' 'Active(file): 262788 kB' 'Inactive(file): 191084 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4235464 kB' 'Mapped: 154416 kB' 'AnonPages: 361560 
kB' 'Shmem: 3781592 kB' 'KernelStack: 14984 kB' 'PageTables: 5236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132544 kB' 'Slab: 468416 kB' 'SReclaimable: 132544 kB' 'SUnreclaim: 335872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:32.203 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [repeated xtrace elided: IFS=': ' / read -r var val _ / [[ field == HugePages_Surp ]] / continue, once per node1 meminfo field from MemTotal through HugePages_Free] 00:03:32.203-00:03:32.204 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.204 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.204 08:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.204 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.204 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.204 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.204 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.204 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:32.204 node0=512 expecting 513 00:03:32.204 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.204 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.204 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:32.204 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:32.204 node1=513 expecting 512 00:03:32.204 08:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:32.204 00:03:32.204 real 0m3.670s 00:03:32.204 user 0m1.538s 00:03:32.204 sys 0m2.180s 00:03:32.204 08:41:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:32.204 08:41:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:32.204 ************************************ 00:03:32.204 END TEST odd_alloc 00:03:32.204 ************************************ 00:03:32.204 08:41:54 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:32.204 08:41:54 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:32.204 08:41:54 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:32.204 08:41:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:32.466 ************************************ 00:03:32.466 START TEST custom_alloc 00:03:32.466 ************************************ 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:32.466 08:41:54 
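The odd_alloc result above (node0=512, node1=513, totalling an odd 1025 pages) comes from splitting the requested page count evenly across NUMA nodes, with the remainder landing on the last node. A hypothetical sketch of that split policy (function name and interface are illustrative, not the harness's actual helper):

```shell
# Divide nr_hugepages across nodes; the last node absorbs the remainder,
# so an odd total over 2 nodes yields e.g. 512 and 513.
split_hugepages() {
    local nr=$1 nodes=$2 node per
    per=$((nr / nodes))
    for ((node = 0; node < nodes - 1; node++)); do
        echo "node${node}=${per}"
    done
    echo "node$((nodes - 1))=$((nr - per * (nodes - 1)))"
}

split_hugepages 1025 2   # prints node0=512 then node1=513
```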
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@83 -- # : 0 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:32.466 08:41:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:35.777 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:35.777 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:35.777 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:35.777 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:35.777 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:35.777 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:35.777 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:35.777 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:35.777 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:35.777 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:35.777 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:35.777 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:35.777 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:35.777 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:35.777 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:35.777 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:35.777 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
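The @181-@183 loop and the @187 assignment traced above build the HUGENODE list that scripts/setup.sh consumes and sum the per-node counts into the expected total. A minimal standalone sketch of that assembly (an assumed simplification, not the actual setup/hugepages.sh source; the nodes_hp values are the ones from this run):

```shell
#!/usr/bin/env bash
# Sketch of the HUGENODE assembly seen in the trace: collect explicit
# per-node hugepage counts and sum them into the expected total.
nodes_hp=([0]=512 [1]=1024)  # pages per NUMA node, from this run

HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done

# Joining the array with a comma yields the string handed to setup.sh.
IFS=,
echo "HUGENODE=${HUGENODE[*]}"      # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
echo "nr_hugepages=$_nr_hugepages"  # nr_hugepages=1536
```

With the two counts from this run, the join reproduces HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' and the 1536-page total that verify_nr_hugepages later checks against HugePages_Total.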
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:35.777 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104349084 kB' 'MemAvailable: 107612700 kB' 'Buffers: 2704 kB' 'Cached: 14347416 kB' 'SwapCached: 0 kB' 'Active: 11409232 kB' 'Inactive: 3514596 kB' 'Active(anon): 10997204 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576416 kB' 'Mapped: 210116 kB' 'Shmem: 10423496 kB' 'KReclaimable: 322648 kB' 'Slab: 1185700 kB' 'SReclaimable: 322648 kB' 'SUnreclaim: 863052 kB' 'KernelStack: 27120 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12479144 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB'
[repetitive xtrace elided: every field of the snapshot above is tested against AnonHugePages and skipped with "continue" until the match below]
00:03:35.779 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:35.779 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:35.779 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:35.779 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:35.779 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:35.779 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:35.779 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:35.779 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:35.779 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:35.779 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.779 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.779 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.779 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.779 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.779 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:35.779 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:35.779 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104349820 kB' 'MemAvailable: 107613436 kB' 'Buffers: 2704 kB' 'Cached: 14347420 kB' 'SwapCached: 0 kB' 'Active: 11408900 kB' 'Inactive: 3514596 kB' 'Active(anon): 10996872 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576556 kB' 'Mapped: 210028 kB' 'Shmem: 10423500 kB' 'KReclaimable: 322648 kB' 'Slab: 1185648 kB' 'SReclaimable: 322648 kB' 'SUnreclaim: 863000 kB' 'KernelStack: 27104 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12479160 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235508 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB'
[repetitive xtrace elided: the same per-field scan, this time for HugePages_Surp; the log is truncated mid-scan]
# IFS=': ' 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 08:41:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 
08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104350104 kB' 'MemAvailable: 107613720 kB' 'Buffers: 2704 kB' 'Cached: 14347424 kB' 'SwapCached: 0 kB' 'Active: 11408636 kB' 'Inactive: 3514596 kB' 'Active(anon): 10996608 kB' 'Inactive(anon): 0 kB' 
'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576284 kB' 'Mapped: 210028 kB' 'Shmem: 10423504 kB' 'KReclaimable: 322648 kB' 'Slab: 1185648 kB' 'SReclaimable: 322648 kB' 'SUnreclaim: 863000 kB' 'KernelStack: 27104 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12479184 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235508 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:35.781 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 08:41:58 [... identical IFS/read/compare/continue trace repeated for every remaining meminfo field (Buffers through HugePages_Free), none matching HugePages_Rsvd ...] 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:35.783 08:41:58
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:35.783 nr_hugepages=1536 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.783 resv_hugepages=0 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.783 surplus_hugepages=0 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.783 anon_hugepages=0 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104349348 kB' 'MemAvailable: 107612964 kB' 'Buffers: 2704 kB' 'Cached: 14347460 kB' 'SwapCached: 0 kB' 'Active: 11408944 kB' 'Inactive: 3514596 kB' 'Active(anon): 10996916 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576560 kB' 'Mapped: 210028 kB' 'Shmem: 10423540 kB' 'KReclaimable: 322648 kB' 'Slab: 1185648 kB' 'SReclaimable: 322648 kB' 'SUnreclaim: 863000 kB' 'KernelStack: 27104 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12479204 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235508 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.783 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.785 08:41:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.785 08:41:58 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 51624120 kB' 'MemUsed: 14034888 kB' 'SwapCached: 0 kB' 'Active: 7003180 kB' 'Inactive: 3323512 kB' 'Active(anon): 6853940 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10114644 kB' 'Mapped: 55612 kB' 'AnonPages: 215184 kB' 'Shmem: 6641892 kB' 'KernelStack: 12216 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 190104 kB' 'Slab: 717532 kB' 'SReclaimable: 190104 kB' 'SUnreclaim: 527428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.785 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.785 08:41:58 
[xtrace trimmed: the identical IFS=': ' / read -r var val _ / [[ $var == HugePages_Surp ]] / continue iteration repeats for each remaining meminfo key, MemFree through HugePages_Free]
08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.787 08:41:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 52725912 kB' 'MemUsed: 7953928 kB' 'SwapCached: 0 kB' 'Active: 4405656 kB' 'Inactive: 191084 kB' 'Active(anon): 4142868 kB' 'Inactive(anon): 0 kB' 'Active(file): 262788 kB' 'Inactive(file): 
191084 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4235540 kB' 'Mapped: 154416 kB' 'AnonPages: 361244 kB' 'Shmem: 3781668 kB' 'KernelStack: 14872 kB' 'PageTables: 4808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132544 kB' 'Slab: 468116 kB' 'SReclaimable: 132544 kB' 'SUnreclaim: 335572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.787 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.787 08:41:58 
[xtrace trimmed: the identical IFS=': ' / read -r var val _ / [[ $var == HugePages_Surp ]] / continue iteration repeats for each remaining meminfo key, MemFree through HugePages_Free]
08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.788 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.788 08:41:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.788 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.788 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.788 08:41:58 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.788 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.788 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:35.788 node0=512 expecting 512 00:03:35.788 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.789 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.789 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.789 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:35.789 node1=1024 expecting 1024 00:03:35.789 08:41:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:35.789 00:03:35.789 real 0m3.504s 00:03:35.789 user 0m1.320s 00:03:35.789 sys 0m2.231s 00:03:35.789 08:41:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:35.789 08:41:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:35.789 ************************************ 00:03:35.789 END TEST custom_alloc 00:03:35.789 ************************************ 00:03:35.789 08:41:58 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:35.789 08:41:58 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:35.789 08:41:58 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:35.789 08:41:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:36.064 ************************************ 00:03:36.064 START TEST no_shrink_alloc 00:03:36.064 ************************************ 00:03:36.064 08:41:58 setup.sh.hugepages.no_shrink_alloc -- 
common/autotest_common.sh@1124 -- # no_shrink_alloc
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:36.065 08:41:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:39.397 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:39.397 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:39.397 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:39.397 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:39.397 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:39.397 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:39.397 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:39.397 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:39.397 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:39.397 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:39.397 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:39.397 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:39.397 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:39.397 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:39.397 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:39.397 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:39.397 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:39.397 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:39.397 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:39.397 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
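The get_test_nr_hugepages trace above reduces to simple arithmetic: a requested size of 2097152 kB divided by the 2048 kB default hugepage size gives nr_hugepages=1024, which is then assigned to each requested NUMA node (nodes_test[0]=1024). A minimal sketch of that math, assuming illustrative names rather than the real setup/hugepages.sh implementation:

```shell
#!/usr/bin/env bash
# Hypothetical sketch, NOT the actual setup/hugepages.sh: it only mirrors the
# size-to-page-count arithmetic visible in the xtrace above.
default_hugepages=2048  # kB, matches 'Hugepagesize: 2048 kB' in the meminfo snapshot

get_test_nr_hugepages() {
    local size=$1
    shift
    local node_ids=("$@")  # remaining arguments are NUMA node ids
    (( size >= default_hugepages )) || return 1
    local nr_hugepages=$(( size / default_hugepages ))
    # one bucket per requested node, mirroring nodes_test[] in the trace
    local node
    for node in "${node_ids[@]}"; do
        echo "node${node}=${nr_hugepages}"
    done
}

get_test_nr_hugepages 2097152 0  # prints node0=1024
```

With two nodes requested (`get_test_nr_hugepages 2097152 0 1`) each node would get the same 1024-page allotment, which is why the trace tracks a per-node array rather than a single counter.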
00:03:39.397 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:39.397 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:39.397 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:39.397 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:39.397 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:39.397 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:39.398 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:39.398 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:39.398 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:39.398 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:39.398 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.398 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.398 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.398 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.398 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.398 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:39.398 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:39.398 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105314824 kB' 'MemAvailable: 108578440 kB' 'Buffers: 2704 kB' 'Cached: 14347596 kB' 'SwapCached: 0 kB' 'Active: 11417320 kB' 'Inactive: 3514596 kB' 'Active(anon): 11005292 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585140 kB' 'Mapped: 211012 kB' 'Shmem: 10423676 kB' 'KReclaimable: 322648 kB' 'Slab: 1185852 kB' 'SReclaimable: 322648 kB' 'SUnreclaim: 863204 kB' 'KernelStack: 27216 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12489128 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235400 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB'
[repetitive xtrace condensed: for each non-matching field in the snapshot above, setup/common.sh@32 logs one "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" test followed by "continue", "IFS=': '", and "read -r var val _", until AnonHugePages is reached]
00:03:39.399 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:39.399 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:39.399 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:39.399 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:39.399 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:39.399 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:39.399 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:39.399 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:39.399 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:39.399 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.399 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.399 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.399 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.399 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.399 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:39.399 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:39.399 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105321660 kB' 'MemAvailable: 108585276 kB' 'Buffers: 2704 kB' 'Cached: 14347600 kB' 'SwapCached: 0 kB' 'Active: 11411760 kB' 'Inactive: 3514596 kB' 'Active(anon): 10999732 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579596 kB' 'Mapped: 210396 kB' 'Shmem: 10423680 kB' 'KReclaimable: 322648 kB' 'Slab: 1185916 kB' 'SReclaimable: 322648 kB' 'SUnreclaim: 863268 kB' 'KernelStack: 27264 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12484504 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB'
[repetitive xtrace condensed: the same per-field "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" / "IFS=': '" / "read -r var val _" scan repeats for HugePages_Surp; this log chunk is truncated mid-scan after the NFS_Unstable check]
IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.401 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.402 08:42:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105322048 kB' 'MemAvailable: 108585664 kB' 'Buffers: 2704 kB' 'Cached: 14347616 kB' 'SwapCached: 0 kB' 'Active: 11413216 kB' 'Inactive: 3514596 kB' 'Active(anon): 11001188 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580868 kB' 'Mapped: 210528 kB' 'Shmem: 10423696 kB' 'KReclaimable: 322648 kB' 'Slab: 1185916 kB' 'SReclaimable: 322648 kB' 'SUnreclaim: 863268 kB' 'KernelStack: 27152 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12485752 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235332 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.402 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.402 
08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.402
[trace condensed: the same setup/common.sh@31-32 IFS=': ' / read -r var val _ / continue triplet repeats for every /proc/meminfo field from MemFree through HugePages_Free (the full key list printed in the snapshot above) until the key matches HugePages_Rsvd]
00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:39.668 nr_hugepages=1024 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:39.668 resv_hugepages=0 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:39.668 surplus_hugepages=0 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:39.668 anon_hugepages=0 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc
-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105320844 kB' 'MemAvailable: 108584460 kB' 'Buffers: 2704 kB' 'Cached: 14347660 kB' 'SwapCached: 0 kB' 'Active: 11416424 kB' 'Inactive: 3514596 kB' 'Active(anon): 11004396 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584048 kB' 'Mapped: 210928 kB' 'Shmem: 10423740 kB' 'KReclaimable: 322648 kB' 'Slab: 1185916 kB' 'SReclaimable: 322648 kB' 
'SUnreclaim: 863268 kB' 'KernelStack: 27136 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12488828 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235320 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.668 08:42:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.668 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.669 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:39.670 
08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.670 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.670 
08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 50523412 kB' 'MemUsed: 15135596 kB' 'SwapCached: 0 kB' 'Active: 7010428 kB' 'Inactive: 3323512 kB' 'Active(anon): 6861188 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10114768 kB' 'Mapped: 56512 kB' 'AnonPages: 222476 kB' 'Shmem: 6642016 kB' 'KernelStack: 12296 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 190104 kB' 'Slab: 717776 kB' 'SReclaimable: 190104 kB' 'SUnreclaim: 527672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:39.671 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.671 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.671 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.671 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.671 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.671 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.671 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.671 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.671 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.671 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.671 08:42:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [... identical per-field scan elided: IFS=': '; read -r var val _; continue, repeated for each remaining /proc/meminfo field (SwapCached through HugePages_Free), none matching HugePages_Surp ...] 00:03:39.672 08:42:02 setup.sh.hugepages.no_shrink_alloc
-- setup/common.sh@31 -- # IFS=': ' 00:03:39.672 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.672 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.672 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.672 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:39.672 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.672 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:39.672 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:39.672 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:39.672 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:39.672 node0=1024 expecting 1024 00:03:39.672 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:39.672 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:39.672 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:39.672 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:39.672 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.672 08:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:42.978 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:42.978 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:42.978 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:42.978 
0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:42.978 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:42.978 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:42.978 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:42.978 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:42.978 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:42.978 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:42.978 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:42.978 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:42.978 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:42.978 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:42.978 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:42.978 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:42.978 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:43.245 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105325212 kB' 'MemAvailable: 108588828 kB' 'Buffers: 2704 kB' 'Cached: 14347752 kB' 'SwapCached: 0 kB' 'Active: 11418436 kB' 'Inactive: 3514596 kB' 'Active(anon): 11006408 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585412 kB' 'Mapped: 211028 kB' 'Shmem: 10423832 kB' 'KReclaimable: 322648 kB' 'Slab: 1186184 kB' 'SReclaimable: 322648 kB' 'SUnreclaim: 863536 kB' 'KernelStack: 27168 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12490060 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235480 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.245 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.245 08:42:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [... identical per-field scan elided: IFS=': '; read -r var val _; continue, repeated for each remaining /proc/meminfo field (Buffers through VmallocUsed), none matching AnonHugePages ...] 00:03:43.246 08:42:05
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.246 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.246 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.246 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.246 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.246 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.246 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.246 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.246 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.246 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.246 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.246 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.246 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.246 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.246 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.246 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.246 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.246 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.247 08:42:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105326696 kB' 'MemAvailable: 108590312 kB' 'Buffers: 2704 kB' 'Cached: 14347756 kB' 'SwapCached: 0 kB' 'Active: 11417424 kB' 'Inactive: 3514596 kB' 'Active(anon): 11005396 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584896 kB' 'Mapped: 210952 kB' 'Shmem: 10423836 kB' 'KReclaimable: 322648 kB' 'Slab: 1186172 kB' 'SReclaimable: 322648 kB' 'SUnreclaim: 863524 kB' 'KernelStack: 27120 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12490212 kB' 
'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235432 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 
08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 
08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.247 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.248 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105326696 kB' 'MemAvailable: 108590312 kB' 'Buffers: 2704 kB' 'Cached: 14347788 kB' 'SwapCached: 0 kB' 'Active: 11418052 kB' 'Inactive: 3514596 kB' 
'Active(anon): 11006024 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585556 kB' 'Mapped: 210952 kB' 'Shmem: 10423868 kB' 'KReclaimable: 322648 kB' 'Slab: 1186172 kB' 'SReclaimable: 322648 kB' 'SUnreclaim: 863524 kB' 'KernelStack: 27200 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12511248 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235448 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB' 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- 
00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:43.249 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: setup/common.sh@32 tests every /proc/meminfo field in turn (MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, ..., CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free) against HugePages_Rsvd and issues `continue` for each non-matching line]
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:43.251 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105327284 kB' 'MemAvailable: 108590900 kB' 'Buffers: 2704 kB' 'Cached: 14347812 kB' 'SwapCached: 0 kB' 'Active: 11418064 kB' 'Inactive: 3514596 kB' 'Active(anon): 11006036 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585576 kB' 'Mapped: 210952 kB' 'Shmem: 10423892 kB' 'KReclaimable: 322648 kB' 'Slab: 1186172 kB' 'SReclaimable: 322648 kB' 'SUnreclaim: 863524 kB' 'KernelStack: 27200 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12490628 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235448 kB' 'VmallocChunk: 0 kB' 'Percpu: 126144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4463988 kB' 'DirectMap2M: 29818880 kB' 'DirectMap1G: 101711872 kB'
[xtrace elided: setup/common.sh@32 again tests each /proc/meminfo field in turn against HugePages_Total, issuing `continue` for each non-matching line; the scan continues below]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
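The wall of xtrace above is setup/common.sh's `get_meminfo` scanning every `/proc/meminfo` key until it reaches the requested one (the `\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l` pattern is just bash xtrace escaping of the literal string `HugePages_Total`; every non-matching key hits `continue`). A minimal, self-contained sketch of that parse loop — using a sample file in place of the real `/proc/meminfo` or per-node meminfo, so it runs anywhere:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop traced above: split each
# "Key: value unit" line on ': ' and print the value for the key.
# The temp file stands in for /proc/meminfo (an assumption of this
# sketch; the real helper picks the file from its node argument).
set -eu

mem_f=$(mktemp)
printf '%s\n' \
    'MemTotal: 65659008 kB' \
    'HugePages_Total: 1024' \
    'HugePages_Free: 1024' \
    'HugePages_Surp: 0' > "$mem_f"

get_meminfo() {
    local get=$1 file=$2 var val _
    while IFS=': ' read -r var val _; do
        # Each non-matching key is one "continue" line in the trace
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}

get_meminfo HugePages_Total "$mem_f"   # prints 1024
rm -f "$mem_f"
```

Setting `IFS=': '` makes `read` split on any run of colons and spaces, so `var` gets the key and `val` the number, with the `kB` unit falling into the throwaway `_` field.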
00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.253 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 50544784 kB' 'MemUsed: 15114224 kB' 'SwapCached: 0 kB' 'Active: 7011280 kB' 'Inactive: 3323512 kB' 'Active(anon): 6862040 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10114864 kB' 'Mapped: 56536 kB' 'AnonPages: 223188 kB' 'Shmem: 6642112 kB' 'KernelStack: 12264 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 190104 kB' 'Slab: 717988 kB' 'SReclaimable: 190104 kB' 'SUnreclaim: 527884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 
08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.254 
08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.254 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.255 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- 
# (( nodes_test[node] += 0 )) 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:43.517 node0=1024 expecting 1024 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:43.517 00:03:43.517 real 0m7.466s 00:03:43.517 user 0m2.964s 00:03:43.517 sys 0m4.608s 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:43.517 08:42:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:43.517 ************************************ 00:03:43.517 END TEST no_shrink_alloc 00:03:43.517 ************************************ 00:03:43.517 08:42:05 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:43.517 08:42:05 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:43.517 08:42:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:43.517 08:42:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.517 08:42:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.517 08:42:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.517 08:42:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.517 08:42:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:43.517 08:42:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.517 08:42:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.517 08:42:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.517 08:42:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.517 08:42:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:43.517 08:42:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:43.517 00:03:43.517 real 0m26.440s 00:03:43.517 user 0m10.383s 00:03:43.517 sys 0m16.329s 00:03:43.517 08:42:05 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:43.517 08:42:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.517 ************************************ 00:03:43.517 END TEST hugepages 00:03:43.517 ************************************ 00:03:43.517 08:42:05 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:43.517 08:42:05 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:43.517 08:42:05 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:43.517 08:42:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:43.517 ************************************ 00:03:43.517 START TEST driver 00:03:43.517 ************************************ 00:03:43.517 08:42:05 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:43.517 * Looking for test storage... 
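The `clear_hp` calls above walk every NUMA node and echo 0 into each hugepage-size counter under sysfs. A hedged sketch of that loop follows; the `root` parameter is an assumption added here so the sketch can be exercised against a throwaway directory tree instead of the real `/sys`, which requires root:

```shell
#!/usr/bin/env bash
# Sketch of the clear_hp loop from setup/hugepages.sh: for every NUMA
# node, write 0 to each hugepages-<size>/nr_hugepages counter.
# The root argument is an assumption of this sketch (the real script
# writes straight under /sys).
set -eu

clear_hp() {
    local root=${1:-/sys} node hp
    for node in "$root"/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*/nr_hugepages; do
            [ -e "$hp" ] || continue   # glob may match nothing
            echo 0 > "$hp"
        done
    done
}

# Exercise against a fake tree so no privileges are needed
root=$(mktemp -d)
mkdir -p "$root"/devices/system/node/node0/hugepages/hugepages-2048kB
echo 1024 > "$root"/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
clear_hp "$root"
cat "$root"/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages   # prints 0
rm -rf "$root"
```

The `CLEAR_HUGE=yes` export seen in the log simply tells later test stages that this zeroing has already been done.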
00:03:43.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:43.517 08:42:06 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:43.517 08:42:06 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.517 08:42:06 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.814 08:42:10 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:48.814 08:42:10 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:48.814 08:42:10 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:48.814 08:42:10 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:48.814 ************************************ 00:03:48.814 START TEST guess_driver 00:03:48.814 ************************************ 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@29 
-- # (( 314 > 0 )) 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:48.814 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:48.814 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:48.814 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:48.814 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:48.814 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:48.814 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:48.814 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:48.814 Looking for driver=vfio-pci 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.814 08:42:10 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.121 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.383 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:52.383 08:42:14 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:52.383 08:42:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:52.383 08:42:14 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:57.679
00:03:57.679 real 0m8.524s
00:03:57.679 user 0m2.901s
00:03:57.679 sys 0m4.833s
00:03:57.679 08:42:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:57.679 08:42:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:03:57.679 ************************************
00:03:57.679 END TEST guess_driver
00:03:57.679 ************************************
00:03:57.679
00:03:57.679 real 0m13.555s
00:03:57.679 user 0m4.457s
00:03:57.679 sys 0m7.510s
00:03:57.679 08:42:19 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:57.679 08:42:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:57.679 ************************************
00:03:57.679 END TEST driver
00:03:57.679 ************************************
00:03:57.679 08:42:19 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:57.679 08:42:19 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:57.679 08:42:19 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:57.679 08:42:19 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:57.680 ************************************
00:03:57.680 START TEST devices
00:03:57.680 ************************************
00:03:57.680 08:42:19 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:57.680 * Looking for test storage...
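The guess_driver trace above reduces to one decision: vfio-pci is chosen when the IOMMU is populated (the trace counted 314 groups) or unsafe no-IOMMU mode is explicitly enabled, and `modprobe --show-depends vfio_pci` resolves to real `.ko` modules. A pure-logic sketch of that decision — the function name and the argument passing are illustrative, not the actual setup/driver.sh interface, so it runs without root or real hardware:

```shell
# Hypothetical re-implementation of the pick_driver/vfio choice traced above.
# Inputs are passed as arguments instead of being read from /sys or modprobe.
pick_driver() {
    local unsafe_vfio=$1    # /sys/module/vfio/parameters/enable_unsafe_noiommu_mode (Y or N)
    local iommu_groups=$2   # count of /sys/kernel/iommu_groups/* entries
    local deps=$3           # output of: modprobe --show-depends vfio_pci

    # vfio needs a populated IOMMU or the unsafe no-IOMMU escape hatch,
    # plus a module dependency chain that actually resolves to .ko files.
    if [[ $deps == *.ko* ]] && { (( iommu_groups > 0 )) || [[ $unsafe_vfio == Y ]]; }; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}
```

The 'No valid driver found' string mirrors the sentinel compared at setup/driver.sh@51 in the trace.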
00:03:57.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:57.680 08:42:19 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:57.680 08:42:19 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:03:57.680 08:42:19 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:57.680 08:42:19 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:04:00.987 08:42:23 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=()
00:04:00.987 08:42:23 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs
00:04:00.987 08:42:23 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf
00:04:00.987 08:42:23 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme*
00:04:00.987 08:42:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1
00:04:00.987 08:42:23 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1
00:04:00.987 08:42:23 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:00.987 08:42:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]]
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]]
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:00.987 08:42:23 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:04:00.987 08:42:23 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:04:00.987 No valid GPT data, bailing
00:04:00.987 08:42:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:00.987 08:42:23 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:04:00.987 08:42:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:00.987 08:42:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:04:00.987 08:42:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:04:00.987 08:42:23 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size ))
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:04:00.987 08:42:23 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:04:00.987 08:42:23 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:00.987 08:42:23 setup.sh.devices --
common/autotest_common.sh@1106 -- # xtrace_disable
00:04:00.987 08:42:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:00.987 ************************************
00:04:00.987 START TEST nvme_mount
00:04:00.987 ************************************
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:00.987 08:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:04:01.930 Creating new GPT entries in memory.
00:04:01.930 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:01.930 other utilities.
00:04:01.930 08:42:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:01.930 08:42:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:01.930 08:42:24 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:01.930 08:42:24 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:01.930 08:42:24 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:02.911 Creating new GPT entries in memory.
00:04:02.911 The operation has completed successfully.
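The `sgdisk /dev/nvme0n1 --new=1:2048:2099199` seen above is plain sector arithmetic from setup/common.sh: the byte size is converted to 512-byte sectors and the first partition starts at sector 2048. Reproducing the numbers standalone (the sgdisk command is only echoed, never executed):

```shell
# Sector math behind the traced `sgdisk --new=1:2048:2099199` call:
# a 1 GiB partition expressed in 512-byte sectors.
size=1073741824          # bytes, from `local size=1073741824` in the trace
(( size /= 512 ))        # now in sectors: 2097152
part_start=2048          # first partition starts at sector 2048
(( part_end = part_start + size - 1 ))
echo "sgdisk /dev/nvme0n1 --new=1:${part_start}:${part_end}"
# prints: sgdisk /dev/nvme0n1 --new=1:2048:2099199
```

For a second partition, part_start would become part_end + 1, matching the ternary at setup/common.sh@58.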
00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2334877
00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0
00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
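Earlier in the devices trace, /dev/nvme0n1 qualified as the test disk because blkid reported no partition-table type and its 1920383410176 bytes cleared min_disk_size (3221225472, i.e. 3 GiB, at devices.sh@198). That check in isolation — inputs are plain parameters here, not live blkid/sysfs queries, so this is a sketch rather than the real block_in_use helper:

```shell
# Pure-logic sketch of the disk-qualification step from the trace:
# a disk is accepted when no partition table is present and it is
# at least min_disk_size bytes.
disk_qualifies() {
    local size_bytes=$1   # 512 * /sys/block/<dev>/size in the real script
    local pttype=$2       # blkid -s PTTYPE -o value /dev/<dev> (empty if none)
    local min_disk_size=3221225472

    [[ -z $pttype ]] && (( size_bytes >= min_disk_size ))
}
```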
00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.911 08:42:25 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 
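cleanup_nvme, entered at setup/devices.sh@110 above, tears the fixture down in a fixed order: unmount the mount point if it is mounted, then wipe filesystem signatures from the partition, then from the whole disk. A dry-run sketch that only prints the would-be commands — the mounted flag, mount point, and device paths are illustrative arguments, not the real helper's interface:

```shell
# Dry-run of the traced cleanup order (devices.sh@20-28):
# umount when mounted, then wipefs the partition and the disk.
cleanup_nvme_dryrun() {
    local mounted=$1 mnt=$2 part=$3 disk=$4
    # umount only when the mount point was actually mounted
    [[ $mounted == 1 ]] && echo "umount $mnt"
    echo "wipefs --all $part"
    echo "wipefs --all $disk"
}
```

Wiping the partition before the parent disk matters: once the disk's GPT is gone, the partition node may disappear, as the wipefs output in the trace below shows.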
00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:06.213 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:06.213 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:06.475 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:06.475 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54
00:04:06.475 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:06.475 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount
/dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.475 08:42:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.784 08:42:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.784 08:42:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.784 08:42:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.784 08:42:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.784 08:42:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.784 08:42:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.784 08:42:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.784 08:42:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.784 08:42:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.784 08:42:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.784 08:42:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.784 08:42:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.784 08:42:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.784 08:42:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.784 08:42:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.784 08:42:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ 
\d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.784 08:42:32 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.046 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:10.046 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:10.046 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.046 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:10.046 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:10.046 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.046 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:10.046 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:10.046 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:10.046 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:10.046 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:10.046 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:10.046 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:10.046 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:10.046 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.046 08:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:10.046 08:42:32 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:10.046 08:42:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.046 08:42:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status
00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:13.351 08:42:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:13.613 08:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:13.613 08:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:13.613 08:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:04:13.613 08:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:04:13.613 08:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:13.613 08:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:13.613 08:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:13.613 08:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:13.613 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:13.613 
00:04:13.613 real 0m12.754s
00:04:13.613 user 0m3.784s
00:04:13.613 sys 0m6.763s
00:04:13.613 08:42:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:13.613 08:42:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:04:13.613 ************************************
00:04:13.613 END TEST nvme_mount
00:04:13.613 ************************************
00:04:13.613 08:42:36 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:04:13.613 08:42:36 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
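The wipefs record in the nvme_mount teardown above reports `2 bytes were erased at offset 0x00000438 (ext4): 53 ef`. Those two bytes are the ext4 superblock magic 0xEF53, stored little-endian at byte 1024 + 56 = 0x438; clearing just them is enough to make the filesystem signature disappear. A minimal sketch against a scratch file (the temp-file image is an illustration, not a device from this log):

```shell
# Write the ext4 magic bytes into a scratch image at the superblock
# offset, then read them back the way wipefs would find them.
img=$(mktemp)
truncate -s 4096 "$img"                     # sparse stand-in for a block device
# ext4 superblock starts at byte 1024; s_magic sits 56 bytes in: offset 0x438
printf '\x53\xef' | dd of="$img" bs=1 seek=$((0x438)) conv=notrunc status=none
magic=$(od -An -tx1 -j $((0x438)) -N 2 "$img" | tr -d ' \n')
echo "$magic"                               # 53ef (0xEF53 little-endian)
rm -f "$img"
```

wipefs zeroes exactly these signature bytes rather than the whole device, which is why the log line lists both the offset and the erased values.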
00:04:13.613 08:42:36 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:13.613 08:42:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:13.613 ************************************
00:04:13.613 START TEST dm_mount
00:04:13.613 ************************************
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:13.613 08:42:36 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:14.998 Creating new GPT entries in memory.
00:04:14.998 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:14.998 other utilities.
00:04:14.998 08:42:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:14.998 08:42:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:14.998 08:42:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:14.998 08:42:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:14.998 08:42:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:15.939 Creating new GPT entries in memory.
00:04:15.939 The operation has completed successfully.
00:04:15.939 08:42:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:15.939 08:42:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:15.940 08:42:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:15.940 08:42:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:15.940 08:42:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:04:16.882 The operation has completed successfully.
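The sector numbers in the two `sgdisk --new` records above fall out of the arithmetic traced at setup/common.sh@51-59: the 1 GiB `size` is divided down to 512-byte sectors, the first partition starts at the first usable GPT sector 2048, and each subsequent partition starts one sector past the previous end. A standalone re-derivation in plain bash (no device access, values taken from the trace):

```shell
# Re-derive the sgdisk arguments from the partition_drive trace:
# 1073741824 bytes per partition -> sectors, with chained start/end.
disk=nvme0n1
part_no=2
size=1073741824            # 1 GiB per partition, in bytes
(( size /= 512 ))          # 2097152 sectors of 512 bytes each
parts=()
part_start=0 part_end=0
for (( part = 1; part <= part_no; part++ )); do
  parts+=("${disk}p$part")
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 ))
  echo "--new=${part}:${part_start}:${part_end}"
done
echo "${parts[@]}"
# Output:
# --new=1:2048:2099199
# --new=2:2099200:4196351
# nvme0n1p1 nvme0n1p2
```

This reproduces exactly `--new=1:2048:2099199` and `--new=2:2099200:4196351` as issued under flock in the log.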
00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2339900 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:16.882 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.883 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:16.883 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:16.883 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:16.883 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:16.883 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:16.883 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:16.883 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:16.883 08:42:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:16.883 08:42:39 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.883 08:42:39 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:20.186 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:20.447 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:20.447 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:20.447 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:20.447 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:20.447 08:42:42 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:04:20.447 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.447 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:20.447 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.447 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.447 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:20.447 08:42:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.447 08:42:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.447 08:42:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.749 08:42:45 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 
== \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:23.749 08:42:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:23.749 08:42:46 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:23.749 08:42:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:23.749 08:42:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0
00:04:23.749 08:42:46 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm
00:04:23.749 08:42:46 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:23.749 08:42:46 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:23.749 08:42:46 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:04:23.749 08:42:46 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:23.749 08:42:46 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:04:23.749 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:23.749 08:42:46 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:23.749 08:42:46 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:04:23.749 
00:04:23.749 real 0m10.101s
00:04:23.749 user 0m2.591s
00:04:23.749 sys 0m4.540s
00:04:23.749 08:42:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:23.749 08:42:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:04:23.749 ************************************
00:04:23.749 END TEST dm_mount
00:04:23.749 ************************************
00:04:23.749 08:42:46 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:04:23.749 08:42:46 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:04:23.749 08:42:46 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:23.749 08:42:46 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:23.749 08:42:46 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:23.749 08:42:46 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:23.749 08:42:46 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:24.010 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:24.010 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54
00:04:24.010 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:24.010 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:24.010 08:42:46 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:04:24.010 08:42:46 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:24.010 08:42:46 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:24.010 08:42:46 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:24.010 08:42:46 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:24.010 08:42:46 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:04:24.010 08:42:46 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:04:24.010 
00:04:24.010 real 0m26.963s
00:04:24.010 user 0m7.690s
00:04:24.010 sys 0m13.894s
00:04:24.010 08:42:46 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:24.010 08:42:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:24.010 ************************************
00:04:24.010 END TEST devices
00:04:24.010 ************************************
00:04:24.010 
00:04:24.010 real 1m32.549s
00:04:24.010 user 0m30.923s
00:04:24.010 sys 0m52.665s
00:04:24.010 08:42:46 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:24.010 08:42:46 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:24.010 ************************************
00:04:24.010 END TEST setup.sh
00:04:24.010 ************************************
00:04:24.272 08:42:46 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:27.606 Hugepages
00:04:27.606 node hugesize free / total
00:04:27.606 node0 1048576kB 0 / 0
00:04:27.606 node0 2048kB 2048 / 2048
00:04:27.606 node1 1048576kB 0 / 0
00:04:27.606 node1 2048kB 0 / 0
00:04:27.606 
00:04:27.606 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:27.606 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:04:27.606 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:04:27.606 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:04:27.606 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:04:27.606 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:04:27.606 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:04:27.606 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:04:27.606 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:04:27.606 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:04:27.606 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:04:27.606 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:04:27.606 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:04:27.606 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:04:27.606 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:04:27.606 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:04:27.606 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:04:27.606 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:04:27.606 08:42:50 -- spdk/autotest.sh@130 -- # uname -s
00:04:27.606 08:42:50 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:04:27.606 08:42:50 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:04:27.606 08:42:50 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:30.906 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:04:30.906 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:04:30.906 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:04:30.906 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:04:30.906 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:04:30.906 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:04:30.906 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:04:30.906 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:04:30.906 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:04:30.906 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:04:30.906 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:04:30.906 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:04:31.167 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:04:31.167 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:04:31.167 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:04:31.167 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:04:33.080 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:04:33.080 08:42:55 -- common/autotest_common.sh@1531 -- # sleep 1
00:04:34.022 08:42:56 --
common/autotest_common.sh@1532 -- # bdfs=() 00:04:34.022 08:42:56 -- common/autotest_common.sh@1532 -- # local bdfs 00:04:34.022 08:42:56 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:04:34.022 08:42:56 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:04:34.022 08:42:56 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:34.022 08:42:56 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:34.022 08:42:56 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:34.022 08:42:56 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:34.022 08:42:56 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:34.284 08:42:56 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:04:34.284 08:42:56 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:04:34.284 08:42:56 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:37.652 Waiting for block devices as requested 00:04:37.652 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:37.652 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:37.652 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:37.652 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:37.652 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:37.914 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:37.914 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:37.914 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:38.176 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:38.176 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:38.438 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:38.438 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:38.438 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:38.438 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:38.699 0000:00:01.3 (8086 0b00): vfio-pci -> 
ioatdma 00:04:38.699 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:38.699 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:38.962 08:43:01 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:04:38.962 08:43:01 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:38.962 08:43:01 -- common/autotest_common.sh@1501 -- # grep 0000:65:00.0/nvme/nvme 00:04:38.962 08:43:01 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:04:38.962 08:43:01 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:38.962 08:43:01 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:38.962 08:43:01 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:38.962 08:43:01 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:04:38.962 08:43:01 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:04:38.962 08:43:01 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:04:38.962 08:43:01 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:04:38.962 08:43:01 -- common/autotest_common.sh@1544 -- # grep oacs 00:04:38.962 08:43:01 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:04:38.962 08:43:01 -- common/autotest_common.sh@1544 -- # oacs=' 0x5f' 00:04:38.962 08:43:01 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:04:38.962 08:43:01 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:04:38.962 08:43:01 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:04:38.962 08:43:01 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:04:38.962 08:43:01 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:04:38.962 08:43:01 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:04:38.962 08:43:01 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:04:38.962 08:43:01 -- 
common/autotest_common.sh@1556 -- # continue 00:04:38.962 08:43:01 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:38.962 08:43:01 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:38.962 08:43:01 -- common/autotest_common.sh@10 -- # set +x 00:04:39.224 08:43:01 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:39.224 08:43:01 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:39.224 08:43:01 -- common/autotest_common.sh@10 -- # set +x 00:04:39.224 08:43:01 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.531 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:42.531 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:42.531 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:42.531 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:42.531 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:42.531 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:42.531 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:42.531 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:42.531 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:42.531 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:42.531 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:42.531 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:42.531 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:42.531 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:42.531 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:42.531 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:42.531 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:42.793 08:43:05 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:42.793 08:43:05 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:42.793 08:43:05 -- common/autotest_common.sh@10 -- # set +x 00:04:42.793 08:43:05 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:42.793 08:43:05 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:04:42.793 08:43:05 -- 
common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:04:42.793 08:43:05 -- common/autotest_common.sh@1576 -- # bdfs=() 00:04:42.793 08:43:05 -- common/autotest_common.sh@1576 -- # local bdfs 00:04:42.793 08:43:05 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:04:42.793 08:43:05 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:42.793 08:43:05 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:42.793 08:43:05 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:42.793 08:43:05 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:42.793 08:43:05 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:42.793 08:43:05 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:04:42.793 08:43:05 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:04:42.793 08:43:05 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:04:42.793 08:43:05 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:42.793 08:43:05 -- common/autotest_common.sh@1579 -- # device=0xa80a 00:04:42.793 08:43:05 -- common/autotest_common.sh@1580 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:42.793 08:43:05 -- common/autotest_common.sh@1585 -- # printf '%s\n' 00:04:42.793 08:43:05 -- common/autotest_common.sh@1591 -- # [[ -z '' ]] 00:04:42.793 08:43:05 -- common/autotest_common.sh@1592 -- # return 0 00:04:42.793 08:43:05 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:42.793 08:43:05 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:42.793 08:43:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:42.793 08:43:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:42.793 08:43:05 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:42.793 08:43:05 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:42.793 08:43:05 -- common/autotest_common.sh@10 -- # set +x 
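The get_nvme_bdfs_by_id walk above keeps only controllers whose PCI device id matches 0x0a54; the Samsung controller on 0000:65:00.0 reports 0xa80a, so the list comes back empty and opal_revert_cleanup has nothing to revert. A minimal sketch of that filter, run against a mock sysfs tree rather than the real /sys (the mock path is illustrative; the ids are taken from the log, not from the helper's source):

```shell
#!/bin/bash
# Mock a sysfs PCI tree with one device, using the id seen in the log.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:65:00.0"
echo 0xa80a > "$sysfs/0000:65:00.0/device"   # Samsung id from the trace

target=0x0a54   # the Intel id the OPAL cleanup step looks for
bdfs=()
for dir in "$sysfs"/*; do
  device=$(cat "$dir/device")
  # Keep the BDF only when its PCI device id matches the target.
  [[ $device == "$target" ]] && bdfs+=("$(basename "$dir")")
done
echo "${#bdfs[@]}"   # 0 here: no 0x0a54 device, cleanup is skipped
rm -rf "$sysfs"
```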
00:04:42.793 08:43:05 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:42.793 08:43:05 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:42.793 08:43:05 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:42.793 08:43:05 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:42.793 08:43:05 -- common/autotest_common.sh@10 -- # set +x 00:04:42.793 ************************************ 00:04:42.793 START TEST env 00:04:42.793 ************************************ 00:04:42.793 08:43:05 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:43.056 * Looking for test storage... 00:04:43.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:43.056 08:43:05 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:43.056 08:43:05 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:43.056 08:43:05 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:43.056 08:43:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.056 ************************************ 00:04:43.056 START TEST env_memory 00:04:43.056 ************************************ 00:04:43.056 08:43:05 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:43.056 00:04:43.056 00:04:43.056 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.056 http://cunit.sourceforge.net/ 00:04:43.056 00:04:43.056 00:04:43.056 Suite: memory 00:04:43.056 Test: alloc and free memory map ...[2024-06-09 08:43:05.541492] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:43.056 passed 00:04:43.056 Test: mem map translation ...[2024-06-09 08:43:05.567147] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:43.056 [2024-06-09 08:43:05.567179] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:43.056 [2024-06-09 08:43:05.567227] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:43.056 [2024-06-09 08:43:05.567235] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:43.056 passed 00:04:43.319 Test: mem map registration ...[2024-06-09 08:43:05.622586] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:43.319 [2024-06-09 08:43:05.622611] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:43.319 passed 00:04:43.319 Test: mem map adjacent registrations ...passed 00:04:43.319 00:04:43.319 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.319 suites 1 1 n/a 0 0 00:04:43.319 tests 4 4 4 0 0 00:04:43.319 asserts 152 152 152 0 n/a 00:04:43.319 00:04:43.319 Elapsed time = 0.195 seconds 00:04:43.319 00:04:43.319 real 0m0.209s 00:04:43.319 user 0m0.196s 00:04:43.319 sys 0m0.012s 00:04:43.319 08:43:05 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:43.319 08:43:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:43.319 ************************************ 00:04:43.319 END TEST env_memory 00:04:43.319 ************************************ 
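The two *ERROR* lines in the "mem map translation" test above are the expected negative cases: spdk_mem_map_set_translation rejects a vaddr or len that is not 2 MB-aligned (2097152 paired with len=1234, and vaddr=1234 paired with 2097152). A sketch of that alignment check in shell arithmetic; the mask and the valid/invalid labels mirror the log's behavior, not SPDK's exact source:

```shell
#!/bin/bash
# 2 MB-alignment check behind the "invalid spdk_mem_map_set_translation
# parameters" errors: both vaddr and len must be multiples of 2 MB.
mask=$(( (1 << 21) - 1 ))
params_valid() {  # params_valid <vaddr> <len>
  (( ($1 & mask) == 0 && ($2 & mask) == 0 )) && echo valid || echo invalid
}
params_valid 2097152 1234      # invalid: len not 2 MB-aligned
params_valid 1234    2097152   # invalid: vaddr not 2 MB-aligned
params_valid 2097152 2097152   # valid: both 2 MB-aligned
```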
00:04:43.319 08:43:05 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:43.319 08:43:05 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:43.319 08:43:05 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:43.319 08:43:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.319 ************************************ 00:04:43.319 START TEST env_vtophys 00:04:43.319 ************************************ 00:04:43.319 08:43:05 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:43.319 EAL: lib.eal log level changed from notice to debug 00:04:43.319 EAL: Detected lcore 0 as core 0 on socket 0 00:04:43.319 EAL: Detected lcore 1 as core 1 on socket 0 00:04:43.319 EAL: Detected lcore 2 as core 2 on socket 0 00:04:43.319 EAL: Detected lcore 3 as core 3 on socket 0 00:04:43.319 EAL: Detected lcore 4 as core 4 on socket 0 00:04:43.319 EAL: Detected lcore 5 as core 5 on socket 0 00:04:43.319 EAL: Detected lcore 6 as core 6 on socket 0 00:04:43.319 EAL: Detected lcore 7 as core 7 on socket 0 00:04:43.319 EAL: Detected lcore 8 as core 8 on socket 0 00:04:43.319 EAL: Detected lcore 9 as core 9 on socket 0 00:04:43.319 EAL: Detected lcore 10 as core 10 on socket 0 00:04:43.319 EAL: Detected lcore 11 as core 11 on socket 0 00:04:43.319 EAL: Detected lcore 12 as core 12 on socket 0 00:04:43.319 EAL: Detected lcore 13 as core 13 on socket 0 00:04:43.319 EAL: Detected lcore 14 as core 14 on socket 0 00:04:43.319 EAL: Detected lcore 15 as core 15 on socket 0 00:04:43.319 EAL: Detected lcore 16 as core 16 on socket 0 00:04:43.319 EAL: Detected lcore 17 as core 17 on socket 0 00:04:43.319 EAL: Detected lcore 18 as core 18 on socket 0 00:04:43.319 EAL: Detected lcore 19 as core 19 on socket 0 00:04:43.319 EAL: Detected lcore 20 as core 20 on socket 0 00:04:43.319 EAL: Detected lcore 21 as core 21 on 
socket 0 00:04:43.319 EAL: Detected lcore 22 as core 22 on socket 0 00:04:43.319 EAL: Detected lcore 23 as core 23 on socket 0 00:04:43.319 EAL: Detected lcore 24 as core 24 on socket 0 00:04:43.319 EAL: Detected lcore 25 as core 25 on socket 0 00:04:43.319 EAL: Detected lcore 26 as core 26 on socket 0 00:04:43.319 EAL: Detected lcore 27 as core 27 on socket 0 00:04:43.319 EAL: Detected lcore 28 as core 28 on socket 0 00:04:43.319 EAL: Detected lcore 29 as core 29 on socket 0 00:04:43.319 EAL: Detected lcore 30 as core 30 on socket 0 00:04:43.319 EAL: Detected lcore 31 as core 31 on socket 0 00:04:43.319 EAL: Detected lcore 32 as core 32 on socket 0 00:04:43.319 EAL: Detected lcore 33 as core 33 on socket 0 00:04:43.319 EAL: Detected lcore 34 as core 34 on socket 0 00:04:43.319 EAL: Detected lcore 35 as core 35 on socket 0 00:04:43.319 EAL: Detected lcore 36 as core 0 on socket 1 00:04:43.319 EAL: Detected lcore 37 as core 1 on socket 1 00:04:43.319 EAL: Detected lcore 38 as core 2 on socket 1 00:04:43.319 EAL: Detected lcore 39 as core 3 on socket 1 00:04:43.319 EAL: Detected lcore 40 as core 4 on socket 1 00:04:43.319 EAL: Detected lcore 41 as core 5 on socket 1 00:04:43.319 EAL: Detected lcore 42 as core 6 on socket 1 00:04:43.319 EAL: Detected lcore 43 as core 7 on socket 1 00:04:43.319 EAL: Detected lcore 44 as core 8 on socket 1 00:04:43.319 EAL: Detected lcore 45 as core 9 on socket 1 00:04:43.319 EAL: Detected lcore 46 as core 10 on socket 1 00:04:43.319 EAL: Detected lcore 47 as core 11 on socket 1 00:04:43.319 EAL: Detected lcore 48 as core 12 on socket 1 00:04:43.319 EAL: Detected lcore 49 as core 13 on socket 1 00:04:43.319 EAL: Detected lcore 50 as core 14 on socket 1 00:04:43.319 EAL: Detected lcore 51 as core 15 on socket 1 00:04:43.319 EAL: Detected lcore 52 as core 16 on socket 1 00:04:43.319 EAL: Detected lcore 53 as core 17 on socket 1 00:04:43.319 EAL: Detected lcore 54 as core 18 on socket 1 00:04:43.319 EAL: Detected lcore 55 as core 19 on 
socket 1 00:04:43.319 EAL: Detected lcore 56 as core 20 on socket 1 00:04:43.319 EAL: Detected lcore 57 as core 21 on socket 1 00:04:43.319 EAL: Detected lcore 58 as core 22 on socket 1 00:04:43.319 EAL: Detected lcore 59 as core 23 on socket 1 00:04:43.319 EAL: Detected lcore 60 as core 24 on socket 1 00:04:43.319 EAL: Detected lcore 61 as core 25 on socket 1 00:04:43.319 EAL: Detected lcore 62 as core 26 on socket 1 00:04:43.319 EAL: Detected lcore 63 as core 27 on socket 1 00:04:43.319 EAL: Detected lcore 64 as core 28 on socket 1 00:04:43.319 EAL: Detected lcore 65 as core 29 on socket 1 00:04:43.319 EAL: Detected lcore 66 as core 30 on socket 1 00:04:43.319 EAL: Detected lcore 67 as core 31 on socket 1 00:04:43.319 EAL: Detected lcore 68 as core 32 on socket 1 00:04:43.319 EAL: Detected lcore 69 as core 33 on socket 1 00:04:43.319 EAL: Detected lcore 70 as core 34 on socket 1 00:04:43.319 EAL: Detected lcore 71 as core 35 on socket 1 00:04:43.319 EAL: Detected lcore 72 as core 0 on socket 0 00:04:43.319 EAL: Detected lcore 73 as core 1 on socket 0 00:04:43.320 EAL: Detected lcore 74 as core 2 on socket 0 00:04:43.320 EAL: Detected lcore 75 as core 3 on socket 0 00:04:43.320 EAL: Detected lcore 76 as core 4 on socket 0 00:04:43.320 EAL: Detected lcore 77 as core 5 on socket 0 00:04:43.320 EAL: Detected lcore 78 as core 6 on socket 0 00:04:43.320 EAL: Detected lcore 79 as core 7 on socket 0 00:04:43.320 EAL: Detected lcore 80 as core 8 on socket 0 00:04:43.320 EAL: Detected lcore 81 as core 9 on socket 0 00:04:43.320 EAL: Detected lcore 82 as core 10 on socket 0 00:04:43.320 EAL: Detected lcore 83 as core 11 on socket 0 00:04:43.320 EAL: Detected lcore 84 as core 12 on socket 0 00:04:43.320 EAL: Detected lcore 85 as core 13 on socket 0 00:04:43.320 EAL: Detected lcore 86 as core 14 on socket 0 00:04:43.320 EAL: Detected lcore 87 as core 15 on socket 0 00:04:43.320 EAL: Detected lcore 88 as core 16 on socket 0 00:04:43.320 EAL: Detected lcore 89 as core 17 on 
socket 0 00:04:43.320 EAL: Detected lcore 90 as core 18 on socket 0 00:04:43.320 EAL: Detected lcore 91 as core 19 on socket 0 00:04:43.320 EAL: Detected lcore 92 as core 20 on socket 0 00:04:43.320 EAL: Detected lcore 93 as core 21 on socket 0 00:04:43.320 EAL: Detected lcore 94 as core 22 on socket 0 00:04:43.320 EAL: Detected lcore 95 as core 23 on socket 0 00:04:43.320 EAL: Detected lcore 96 as core 24 on socket 0 00:04:43.320 EAL: Detected lcore 97 as core 25 on socket 0 00:04:43.320 EAL: Detected lcore 98 as core 26 on socket 0 00:04:43.320 EAL: Detected lcore 99 as core 27 on socket 0 00:04:43.320 EAL: Detected lcore 100 as core 28 on socket 0 00:04:43.320 EAL: Detected lcore 101 as core 29 on socket 0 00:04:43.320 EAL: Detected lcore 102 as core 30 on socket 0 00:04:43.320 EAL: Detected lcore 103 as core 31 on socket 0 00:04:43.320 EAL: Detected lcore 104 as core 32 on socket 0 00:04:43.320 EAL: Detected lcore 105 as core 33 on socket 0 00:04:43.320 EAL: Detected lcore 106 as core 34 on socket 0 00:04:43.320 EAL: Detected lcore 107 as core 35 on socket 0 00:04:43.320 EAL: Detected lcore 108 as core 0 on socket 1 00:04:43.320 EAL: Detected lcore 109 as core 1 on socket 1 00:04:43.320 EAL: Detected lcore 110 as core 2 on socket 1 00:04:43.320 EAL: Detected lcore 111 as core 3 on socket 1 00:04:43.320 EAL: Detected lcore 112 as core 4 on socket 1 00:04:43.320 EAL: Detected lcore 113 as core 5 on socket 1 00:04:43.320 EAL: Detected lcore 114 as core 6 on socket 1 00:04:43.320 EAL: Detected lcore 115 as core 7 on socket 1 00:04:43.320 EAL: Detected lcore 116 as core 8 on socket 1 00:04:43.320 EAL: Detected lcore 117 as core 9 on socket 1 00:04:43.320 EAL: Detected lcore 118 as core 10 on socket 1 00:04:43.320 EAL: Detected lcore 119 as core 11 on socket 1 00:04:43.320 EAL: Detected lcore 120 as core 12 on socket 1 00:04:43.320 EAL: Detected lcore 121 as core 13 on socket 1 00:04:43.320 EAL: Detected lcore 122 as core 14 on socket 1 00:04:43.320 EAL: Detected 
lcore 123 as core 15 on socket 1 00:04:43.320 EAL: Detected lcore 124 as core 16 on socket 1 00:04:43.320 EAL: Detected lcore 125 as core 17 on socket 1 00:04:43.320 EAL: Detected lcore 126 as core 18 on socket 1 00:04:43.320 EAL: Detected lcore 127 as core 19 on socket 1 00:04:43.320 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:43.320 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:43.320 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:43.320 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:43.320 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:43.320 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:43.320 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:43.320 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:43.320 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:43.320 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:43.320 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:43.320 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:43.320 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:43.320 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:43.320 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:43.320 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:43.320 EAL: Maximum logical cores by configuration: 128 00:04:43.320 EAL: Detected CPU lcores: 128 00:04:43.320 EAL: Detected NUMA nodes: 2 00:04:43.320 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:43.320 EAL: Detected shared linkage of DPDK 00:04:43.320 EAL: No shared files mode enabled, IPC will be disabled 00:04:43.320 EAL: Bus pci wants IOVA as 'DC' 00:04:43.320 EAL: Buses did not request a specific IOVA mode. 00:04:43.320 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:43.320 EAL: Selected IOVA mode 'VA' 00:04:43.320 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.320 EAL: Probing VFIO support... 
00:04:43.320 EAL: IOMMU type 1 (Type 1) is supported 00:04:43.320 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:43.320 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:43.320 EAL: VFIO support initialized 00:04:43.320 EAL: Ask a virtual area of 0x2e000 bytes 00:04:43.320 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:43.320 EAL: Setting up physically contiguous memory... 00:04:43.320 EAL: Setting maximum number of open files to 524288 00:04:43.320 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:43.320 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:43.320 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:43.320 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.320 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:43.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.320 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.320 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:43.320 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:43.320 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.320 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:43.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.320 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.320 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:43.320 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:43.320 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.320 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:43.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.320 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.320 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:43.320 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:43.320 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.320 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:43.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.320 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.320 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:43.320 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:43.320 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:43.320 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.320 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:43.320 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:43.320 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.320 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:43.320 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:43.320 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.320 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:43.320 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:43.320 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.320 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:43.320 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:43.320 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.320 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:43.320 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:43.320 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.320 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:43.320 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:43.320 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.320 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:43.320 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:43.320 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.320 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:04:43.320 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:43.320 EAL: Hugepages will be freed exactly as allocated. 00:04:43.320 EAL: No shared files mode enabled, IPC is disabled 00:04:43.320 EAL: No shared files mode enabled, IPC is disabled 00:04:43.320 EAL: TSC frequency is ~2400000 KHz 00:04:43.320 EAL: Main lcore 0 is ready (tid=7ff005d74a00;cpuset=[0]) 00:04:43.320 EAL: Trying to obtain current memory policy. 00:04:43.320 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.321 EAL: Restoring previous memory policy: 0 00:04:43.321 EAL: request: mp_malloc_sync 00:04:43.321 EAL: No shared files mode enabled, IPC is disabled 00:04:43.321 EAL: Heap on socket 0 was expanded by 2MB 00:04:43.321 EAL: No shared files mode enabled, IPC is disabled 00:04:43.321 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:43.321 EAL: Mem event callback 'spdk:(nil)' registered 00:04:43.321 00:04:43.321 00:04:43.321 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.321 http://cunit.sourceforge.net/ 00:04:43.321 00:04:43.321 00:04:43.321 Suite: components_suite 00:04:43.321 Test: vtophys_malloc_test ...passed 00:04:43.321 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:43.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.321 EAL: Restoring previous memory policy: 4 00:04:43.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.321 EAL: request: mp_malloc_sync 00:04:43.321 EAL: No shared files mode enabled, IPC is disabled 00:04:43.321 EAL: Heap on socket 0 was expanded by 4MB 00:04:43.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.321 EAL: request: mp_malloc_sync 00:04:43.321 EAL: No shared files mode enabled, IPC is disabled 00:04:43.321 EAL: Heap on socket 0 was shrunk by 4MB 00:04:43.321 EAL: Trying to obtain current memory policy. 
00:04:43.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.321 EAL: Restoring previous memory policy: 4 00:04:43.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.321 EAL: request: mp_malloc_sync 00:04:43.321 EAL: No shared files mode enabled, IPC is disabled 00:04:43.321 EAL: Heap on socket 0 was expanded by 6MB 00:04:43.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.321 EAL: request: mp_malloc_sync 00:04:43.321 EAL: No shared files mode enabled, IPC is disabled 00:04:43.321 EAL: Heap on socket 0 was shrunk by 6MB 00:04:43.321 EAL: Trying to obtain current memory policy. 00:04:43.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.321 EAL: Restoring previous memory policy: 4 00:04:43.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.321 EAL: request: mp_malloc_sync 00:04:43.321 EAL: No shared files mode enabled, IPC is disabled 00:04:43.321 EAL: Heap on socket 0 was expanded by 10MB 00:04:43.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.321 EAL: request: mp_malloc_sync 00:04:43.321 EAL: No shared files mode enabled, IPC is disabled 00:04:43.321 EAL: Heap on socket 0 was shrunk by 10MB 00:04:43.321 EAL: Trying to obtain current memory policy. 00:04:43.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.321 EAL: Restoring previous memory policy: 4 00:04:43.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.321 EAL: request: mp_malloc_sync 00:04:43.321 EAL: No shared files mode enabled, IPC is disabled 00:04:43.321 EAL: Heap on socket 0 was expanded by 18MB 00:04:43.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.321 EAL: request: mp_malloc_sync 00:04:43.321 EAL: No shared files mode enabled, IPC is disabled 00:04:43.321 EAL: Heap on socket 0 was shrunk by 18MB 00:04:43.321 EAL: Trying to obtain current memory policy. 
00:04:43.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.321 EAL: Restoring previous memory policy: 4 00:04:43.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.321 EAL: request: mp_malloc_sync 00:04:43.321 EAL: No shared files mode enabled, IPC is disabled 00:04:43.321 EAL: Heap on socket 0 was expanded by 34MB 00:04:43.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.583 EAL: request: mp_malloc_sync 00:04:43.583 EAL: No shared files mode enabled, IPC is disabled 00:04:43.583 EAL: Heap on socket 0 was shrunk by 34MB 00:04:43.583 EAL: Trying to obtain current memory policy. 00:04:43.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.583 EAL: Restoring previous memory policy: 4 00:04:43.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.583 EAL: request: mp_malloc_sync 00:04:43.583 EAL: No shared files mode enabled, IPC is disabled 00:04:43.583 EAL: Heap on socket 0 was expanded by 66MB 00:04:43.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.583 EAL: request: mp_malloc_sync 00:04:43.583 EAL: No shared files mode enabled, IPC is disabled 00:04:43.583 EAL: Heap on socket 0 was shrunk by 66MB 00:04:43.583 EAL: Trying to obtain current memory policy. 00:04:43.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.583 EAL: Restoring previous memory policy: 4 00:04:43.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.583 EAL: request: mp_malloc_sync 00:04:43.583 EAL: No shared files mode enabled, IPC is disabled 00:04:43.583 EAL: Heap on socket 0 was expanded by 130MB 00:04:43.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.583 EAL: request: mp_malloc_sync 00:04:43.583 EAL: No shared files mode enabled, IPC is disabled 00:04:43.583 EAL: Heap on socket 0 was shrunk by 130MB 00:04:43.583 EAL: Trying to obtain current memory policy. 
00:04:43.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.583 EAL: Restoring previous memory policy: 4 00:04:43.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.583 EAL: request: mp_malloc_sync 00:04:43.583 EAL: No shared files mode enabled, IPC is disabled 00:04:43.583 EAL: Heap on socket 0 was expanded by 258MB 00:04:43.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.583 EAL: request: mp_malloc_sync 00:04:43.583 EAL: No shared files mode enabled, IPC is disabled 00:04:43.583 EAL: Heap on socket 0 was shrunk by 258MB 00:04:43.583 EAL: Trying to obtain current memory policy. 00:04:43.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.583 EAL: Restoring previous memory policy: 4 00:04:43.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.583 EAL: request: mp_malloc_sync 00:04:43.583 EAL: No shared files mode enabled, IPC is disabled 00:04:43.583 EAL: Heap on socket 0 was expanded by 514MB 00:04:43.844 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.844 EAL: request: mp_malloc_sync 00:04:43.844 EAL: No shared files mode enabled, IPC is disabled 00:04:43.844 EAL: Heap on socket 0 was shrunk by 514MB 00:04:43.844 EAL: Trying to obtain current memory policy. 
00:04:43.844 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.844 EAL: Restoring previous memory policy: 4 00:04:43.844 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.844 EAL: request: mp_malloc_sync 00:04:43.844 EAL: No shared files mode enabled, IPC is disabled 00:04:43.844 EAL: Heap on socket 0 was expanded by 1026MB 00:04:44.106 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.106 EAL: request: mp_malloc_sync 00:04:44.106 EAL: No shared files mode enabled, IPC is disabled 00:04:44.106 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:44.106 passed 00:04:44.106 00:04:44.106 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.106 suites 1 1 n/a 0 0 00:04:44.106 tests 2 2 2 0 0 00:04:44.106 asserts 497 497 497 0 n/a 00:04:44.106 00:04:44.106 Elapsed time = 0.644 seconds 00:04:44.106 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.106 EAL: request: mp_malloc_sync 00:04:44.106 EAL: No shared files mode enabled, IPC is disabled 00:04:44.106 EAL: Heap on socket 0 was shrunk by 2MB 00:04:44.106 EAL: No shared files mode enabled, IPC is disabled 00:04:44.106 EAL: No shared files mode enabled, IPC is disabled 00:04:44.106 EAL: No shared files mode enabled, IPC is disabled 00:04:44.106 00:04:44.106 real 0m0.762s 00:04:44.106 user 0m0.401s 00:04:44.106 sys 0m0.331s 00:04:44.106 08:43:06 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:44.106 08:43:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:44.106 ************************************ 00:04:44.106 END TEST env_vtophys 00:04:44.106 ************************************ 00:04:44.106 08:43:06 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:44.106 08:43:06 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:44.106 08:43:06 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:44.106 08:43:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.106 
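The env_vtophys pass above grows and shrinks the heap through 4MB, 6MB, 10MB, 18MB, 34MB, 66MB, 130MB, 258MB, 514MB, and 1026MB. Those sizes follow a 2^n + 2 MB ladder; this reconstruction is inferred from the log output alone, not from the test's source:

```shell
#!/bin/bash
# Reproduce the allocation-size ladder seen in vtophys_spdk_malloc_test:
# each step is (2^n + 2) MB for n = 1..10.
for n in $(seq 1 10); do
  printf '%dMB\n' $(( (1 << n) + 2 ))
done
```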
************************************ 00:04:44.106 START TEST env_pci 00:04:44.106 ************************************ 00:04:44.106 08:43:06 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:44.106 00:04:44.106 00:04:44.106 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.106 http://cunit.sourceforge.net/ 00:04:44.106 00:04:44.106 00:04:44.106 Suite: pci 00:04:44.106 Test: pci_hook ...[2024-06-09 08:43:06.632540] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2350948 has claimed it 00:04:44.106 EAL: Cannot find device (10000:00:01.0) 00:04:44.106 EAL: Failed to attach device on primary process 00:04:44.106 passed 00:04:44.106 00:04:44.106 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.106 suites 1 1 n/a 0 0 00:04:44.106 tests 1 1 1 0 0 00:04:44.106 asserts 25 25 25 0 n/a 00:04:44.106 00:04:44.106 Elapsed time = 0.029 seconds 00:04:44.106 00:04:44.106 real 0m0.050s 00:04:44.106 user 0m0.018s 00:04:44.106 sys 0m0.031s 00:04:44.106 08:43:06 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:44.368 08:43:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:44.368 ************************************ 00:04:44.368 END TEST env_pci 00:04:44.368 ************************************ 00:04:44.368 08:43:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:44.368 08:43:06 env -- env/env.sh@15 -- # uname 00:04:44.368 08:43:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:44.368 08:43:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:44.368 08:43:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:44.368 08:43:06 env -- 
common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:04:44.368 08:43:06 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:44.368 08:43:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.369 ************************************ 00:04:44.369 START TEST env_dpdk_post_init 00:04:44.369 ************************************ 00:04:44.369 08:43:06 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:44.369 EAL: Detected CPU lcores: 128 00:04:44.369 EAL: Detected NUMA nodes: 2 00:04:44.369 EAL: Detected shared linkage of DPDK 00:04:44.369 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:44.369 EAL: Selected IOVA mode 'VA' 00:04:44.369 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.369 EAL: VFIO support initialized 00:04:44.369 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:44.369 EAL: Using IOMMU type 1 (Type 1) 00:04:44.630 EAL: Ignore mapping IO port bar(1) 00:04:44.630 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:44.892 EAL: Ignore mapping IO port bar(1) 00:04:44.892 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:44.892 EAL: Ignore mapping IO port bar(1) 00:04:45.180 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:45.180 EAL: Ignore mapping IO port bar(1) 00:04:45.499 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:45.499 EAL: Ignore mapping IO port bar(1) 00:04:45.499 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:45.761 EAL: Ignore mapping IO port bar(1) 00:04:45.761 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:46.022 EAL: Ignore mapping IO port bar(1) 00:04:46.023 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 
00:04:46.023 EAL: Ignore mapping IO port bar(1) 00:04:46.284 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:46.545 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:46.545 EAL: Ignore mapping IO port bar(1) 00:04:46.545 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:46.806 EAL: Ignore mapping IO port bar(1) 00:04:46.806 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:47.067 EAL: Ignore mapping IO port bar(1) 00:04:47.067 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:47.328 EAL: Ignore mapping IO port bar(1) 00:04:47.328 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:47.590 EAL: Ignore mapping IO port bar(1) 00:04:47.590 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:47.590 EAL: Ignore mapping IO port bar(1) 00:04:47.852 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:47.852 EAL: Ignore mapping IO port bar(1) 00:04:48.113 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:48.113 EAL: Ignore mapping IO port bar(1) 00:04:48.113 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:48.375 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:48.375 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:48.375 Starting DPDK initialization... 00:04:48.375 Starting SPDK post initialization... 00:04:48.375 SPDK NVMe probe 00:04:48.375 Attaching to 0000:65:00.0 00:04:48.375 Attached to 0000:65:00.0 00:04:48.375 Cleaning up... 
00:04:50.292 00:04:50.292 real 0m5.718s 00:04:50.292 user 0m0.185s 00:04:50.292 sys 0m0.075s 00:04:50.292 08:43:12 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:50.292 08:43:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:50.292 ************************************ 00:04:50.292 END TEST env_dpdk_post_init 00:04:50.292 ************************************ 00:04:50.292 08:43:12 env -- env/env.sh@26 -- # uname 00:04:50.292 08:43:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:50.292 08:43:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:50.292 08:43:12 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:50.292 08:43:12 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:50.292 08:43:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.292 ************************************ 00:04:50.292 START TEST env_mem_callbacks 00:04:50.292 ************************************ 00:04:50.292 08:43:12 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:50.292 EAL: Detected CPU lcores: 128 00:04:50.292 EAL: Detected NUMA nodes: 2 00:04:50.292 EAL: Detected shared linkage of DPDK 00:04:50.292 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:50.292 EAL: Selected IOVA mode 'VA' 00:04:50.292 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.292 EAL: VFIO support initialized 00:04:50.292 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:50.292 00:04:50.292 00:04:50.292 CUnit - A unit testing framework for C - Version 2.1-3 00:04:50.292 http://cunit.sourceforge.net/ 00:04:50.292 00:04:50.292 00:04:50.292 Suite: memory 00:04:50.292 Test: test ... 
00:04:50.292 register 0x200000200000 2097152 00:04:50.292 malloc 3145728 00:04:50.292 register 0x200000400000 4194304 00:04:50.292 buf 0x200000500000 len 3145728 PASSED 00:04:50.292 malloc 64 00:04:50.292 buf 0x2000004fff40 len 64 PASSED 00:04:50.292 malloc 4194304 00:04:50.292 register 0x200000800000 6291456 00:04:50.292 buf 0x200000a00000 len 4194304 PASSED 00:04:50.292 free 0x200000500000 3145728 00:04:50.292 free 0x2000004fff40 64 00:04:50.292 unregister 0x200000400000 4194304 PASSED 00:04:50.292 free 0x200000a00000 4194304 00:04:50.292 unregister 0x200000800000 6291456 PASSED 00:04:50.292 malloc 8388608 00:04:50.292 register 0x200000400000 10485760 00:04:50.292 buf 0x200000600000 len 8388608 PASSED 00:04:50.292 free 0x200000600000 8388608 00:04:50.292 unregister 0x200000400000 10485760 PASSED 00:04:50.292 passed 00:04:50.292 00:04:50.292 Run Summary: Type Total Ran Passed Failed Inactive 00:04:50.292 suites 1 1 n/a 0 0 00:04:50.292 tests 1 1 1 0 0 00:04:50.292 asserts 15 15 15 0 n/a 00:04:50.292 00:04:50.292 Elapsed time = 0.005 seconds 00:04:50.292 00:04:50.292 real 0m0.058s 00:04:50.292 user 0m0.019s 00:04:50.292 sys 0m0.039s 00:04:50.292 08:43:12 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:50.292 08:43:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:50.292 ************************************ 00:04:50.292 END TEST env_mem_callbacks 00:04:50.292 ************************************ 00:04:50.292 00:04:50.292 real 0m7.295s 00:04:50.292 user 0m1.005s 00:04:50.292 sys 0m0.827s 00:04:50.292 08:43:12 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:50.292 08:43:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.292 ************************************ 00:04:50.292 END TEST env 00:04:50.292 ************************************ 00:04:50.292 08:43:12 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:50.292 08:43:12 
-- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:50.292 08:43:12 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:50.292 08:43:12 -- common/autotest_common.sh@10 -- # set +x 00:04:50.292 ************************************ 00:04:50.292 START TEST rpc 00:04:50.292 ************************************ 00:04:50.292 08:43:12 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:50.292 * Looking for test storage... 00:04:50.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:50.292 08:43:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2352314 00:04:50.292 08:43:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.292 08:43:12 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:50.292 08:43:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2352314 00:04:50.292 08:43:12 rpc -- common/autotest_common.sh@830 -- # '[' -z 2352314 ']' 00:04:50.292 08:43:12 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.292 08:43:12 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:50.292 08:43:12 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.292 08:43:12 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:50.292 08:43:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.553 [2024-06-09 08:43:12.887436] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:04:50.553 [2024-06-09 08:43:12.887509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352314 ] 00:04:50.553 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.553 [2024-06-09 08:43:12.954063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.553 [2024-06-09 08:43:13.027562] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:50.553 [2024-06-09 08:43:13.027602] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2352314' to capture a snapshot of events at runtime. 00:04:50.554 [2024-06-09 08:43:13.027609] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:50.554 [2024-06-09 08:43:13.027616] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:50.554 [2024-06-09 08:43:13.027621] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2352314 for offline analysis/debug. 
00:04:50.554 [2024-06-09 08:43:13.027644] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.124 08:43:13 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:51.124 08:43:13 rpc -- common/autotest_common.sh@863 -- # return 0 00:04:51.124 08:43:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:51.124 08:43:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:51.124 08:43:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:51.124 08:43:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:51.125 08:43:13 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:51.125 08:43:13 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:51.125 08:43:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.385 ************************************ 00:04:51.385 START TEST rpc_integrity 00:04:51.385 ************************************ 00:04:51.385 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:04:51.385 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:51.385 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:51.385 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.385 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:51.385 08:43:13 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:51.385 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:51.385 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:51.385 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:51.385 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:51.385 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.385 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:51.385 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:51.385 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:51.385 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:51.385 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.385 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:51.385 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:51.385 { 00:04:51.385 "name": "Malloc0", 00:04:51.385 "aliases": [ 00:04:51.385 "76a2d9f4-9a60-45e2-bcce-912c3c1bb3c4" 00:04:51.385 ], 00:04:51.385 "product_name": "Malloc disk", 00:04:51.385 "block_size": 512, 00:04:51.385 "num_blocks": 16384, 00:04:51.385 "uuid": "76a2d9f4-9a60-45e2-bcce-912c3c1bb3c4", 00:04:51.385 "assigned_rate_limits": { 00:04:51.385 "rw_ios_per_sec": 0, 00:04:51.385 "rw_mbytes_per_sec": 0, 00:04:51.385 "r_mbytes_per_sec": 0, 00:04:51.385 "w_mbytes_per_sec": 0 00:04:51.385 }, 00:04:51.385 "claimed": false, 00:04:51.385 "zoned": false, 00:04:51.385 "supported_io_types": { 00:04:51.385 "read": true, 00:04:51.385 "write": true, 00:04:51.385 "unmap": true, 00:04:51.385 "write_zeroes": true, 00:04:51.385 "flush": true, 00:04:51.385 "reset": true, 00:04:51.385 "compare": false, 00:04:51.385 "compare_and_write": false, 00:04:51.385 "abort": true, 00:04:51.385 "nvme_admin": false, 00:04:51.385 "nvme_io": false 00:04:51.385 
}, 00:04:51.385 "memory_domains": [ 00:04:51.385 { 00:04:51.385 "dma_device_id": "system", 00:04:51.385 "dma_device_type": 1 00:04:51.385 }, 00:04:51.385 { 00:04:51.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.385 "dma_device_type": 2 00:04:51.385 } 00:04:51.385 ], 00:04:51.385 "driver_specific": {} 00:04:51.385 } 00:04:51.385 ]' 00:04:51.385 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:51.385 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:51.385 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:51.385 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:51.385 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.385 [2024-06-09 08:43:13.838684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:51.385 [2024-06-09 08:43:13.838715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:51.385 [2024-06-09 08:43:13.838729] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeb4630 00:04:51.385 [2024-06-09 08:43:13.838735] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:51.385 [2024-06-09 08:43:13.840034] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:51.385 [2024-06-09 08:43:13.840054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:51.385 Passthru0 00:04:51.385 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:51.385 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:51.385 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:51.385 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.385 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:51.385 08:43:13 
rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:51.385 { 00:04:51.385 "name": "Malloc0", 00:04:51.385 "aliases": [ 00:04:51.385 "76a2d9f4-9a60-45e2-bcce-912c3c1bb3c4" 00:04:51.385 ], 00:04:51.385 "product_name": "Malloc disk", 00:04:51.385 "block_size": 512, 00:04:51.385 "num_blocks": 16384, 00:04:51.385 "uuid": "76a2d9f4-9a60-45e2-bcce-912c3c1bb3c4", 00:04:51.385 "assigned_rate_limits": { 00:04:51.385 "rw_ios_per_sec": 0, 00:04:51.385 "rw_mbytes_per_sec": 0, 00:04:51.385 "r_mbytes_per_sec": 0, 00:04:51.385 "w_mbytes_per_sec": 0 00:04:51.385 }, 00:04:51.385 "claimed": true, 00:04:51.385 "claim_type": "exclusive_write", 00:04:51.385 "zoned": false, 00:04:51.385 "supported_io_types": { 00:04:51.385 "read": true, 00:04:51.385 "write": true, 00:04:51.385 "unmap": true, 00:04:51.385 "write_zeroes": true, 00:04:51.385 "flush": true, 00:04:51.385 "reset": true, 00:04:51.385 "compare": false, 00:04:51.385 "compare_and_write": false, 00:04:51.385 "abort": true, 00:04:51.385 "nvme_admin": false, 00:04:51.385 "nvme_io": false 00:04:51.385 }, 00:04:51.385 "memory_domains": [ 00:04:51.385 { 00:04:51.385 "dma_device_id": "system", 00:04:51.385 "dma_device_type": 1 00:04:51.385 }, 00:04:51.385 { 00:04:51.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.385 "dma_device_type": 2 00:04:51.385 } 00:04:51.385 ], 00:04:51.385 "driver_specific": {} 00:04:51.385 }, 00:04:51.385 { 00:04:51.385 "name": "Passthru0", 00:04:51.385 "aliases": [ 00:04:51.385 "dca683d5-4496-5d91-abc9-bbfe0fb800be" 00:04:51.385 ], 00:04:51.385 "product_name": "passthru", 00:04:51.385 "block_size": 512, 00:04:51.385 "num_blocks": 16384, 00:04:51.385 "uuid": "dca683d5-4496-5d91-abc9-bbfe0fb800be", 00:04:51.385 "assigned_rate_limits": { 00:04:51.385 "rw_ios_per_sec": 0, 00:04:51.385 "rw_mbytes_per_sec": 0, 00:04:51.385 "r_mbytes_per_sec": 0, 00:04:51.385 "w_mbytes_per_sec": 0 00:04:51.385 }, 00:04:51.385 "claimed": false, 00:04:51.385 "zoned": false, 00:04:51.385 "supported_io_types": { 00:04:51.385 
"read": true, 00:04:51.385 "write": true, 00:04:51.385 "unmap": true, 00:04:51.385 "write_zeroes": true, 00:04:51.385 "flush": true, 00:04:51.385 "reset": true, 00:04:51.385 "compare": false, 00:04:51.385 "compare_and_write": false, 00:04:51.385 "abort": true, 00:04:51.385 "nvme_admin": false, 00:04:51.385 "nvme_io": false 00:04:51.385 }, 00:04:51.385 "memory_domains": [ 00:04:51.385 { 00:04:51.385 "dma_device_id": "system", 00:04:51.385 "dma_device_type": 1 00:04:51.385 }, 00:04:51.385 { 00:04:51.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.385 "dma_device_type": 2 00:04:51.385 } 00:04:51.385 ], 00:04:51.385 "driver_specific": { 00:04:51.385 "passthru": { 00:04:51.385 "name": "Passthru0", 00:04:51.385 "base_bdev_name": "Malloc0" 00:04:51.385 } 00:04:51.385 } 00:04:51.385 } 00:04:51.385 ]' 00:04:51.385 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:51.385 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:51.385 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:51.386 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:51.386 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.386 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:51.386 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:51.386 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:51.386 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.386 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:51.386 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:51.386 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:51.386 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.386 08:43:13 rpc.rpc_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:51.386 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:51.386 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:51.646 08:43:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:51.646 00:04:51.646 real 0m0.287s 00:04:51.646 user 0m0.182s 00:04:51.646 sys 0m0.035s 00:04:51.646 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:51.646 08:43:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.646 ************************************ 00:04:51.646 END TEST rpc_integrity 00:04:51.646 ************************************ 00:04:51.646 08:43:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:51.646 08:43:14 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:51.646 08:43:14 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:51.646 08:43:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.646 ************************************ 00:04:51.646 START TEST rpc_plugins 00:04:51.646 ************************************ 00:04:51.646 08:43:14 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:04:51.646 08:43:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:51.646 08:43:14 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:51.646 08:43:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.646 08:43:14 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:51.646 08:43:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:51.646 08:43:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:51.646 08:43:14 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:51.646 08:43:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.646 08:43:14 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:04:51.646 08:43:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:51.646 { 00:04:51.646 "name": "Malloc1", 00:04:51.646 "aliases": [ 00:04:51.646 "01ac25a1-02bc-47b3-87b5-8d8731a98bb5" 00:04:51.646 ], 00:04:51.646 "product_name": "Malloc disk", 00:04:51.646 "block_size": 4096, 00:04:51.646 "num_blocks": 256, 00:04:51.646 "uuid": "01ac25a1-02bc-47b3-87b5-8d8731a98bb5", 00:04:51.646 "assigned_rate_limits": { 00:04:51.646 "rw_ios_per_sec": 0, 00:04:51.646 "rw_mbytes_per_sec": 0, 00:04:51.646 "r_mbytes_per_sec": 0, 00:04:51.646 "w_mbytes_per_sec": 0 00:04:51.646 }, 00:04:51.646 "claimed": false, 00:04:51.646 "zoned": false, 00:04:51.646 "supported_io_types": { 00:04:51.646 "read": true, 00:04:51.646 "write": true, 00:04:51.646 "unmap": true, 00:04:51.646 "write_zeroes": true, 00:04:51.646 "flush": true, 00:04:51.646 "reset": true, 00:04:51.646 "compare": false, 00:04:51.646 "compare_and_write": false, 00:04:51.646 "abort": true, 00:04:51.646 "nvme_admin": false, 00:04:51.646 "nvme_io": false 00:04:51.646 }, 00:04:51.646 "memory_domains": [ 00:04:51.646 { 00:04:51.646 "dma_device_id": "system", 00:04:51.646 "dma_device_type": 1 00:04:51.646 }, 00:04:51.646 { 00:04:51.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.646 "dma_device_type": 2 00:04:51.646 } 00:04:51.646 ], 00:04:51.646 "driver_specific": {} 00:04:51.646 } 00:04:51.646 ]' 00:04:51.646 08:43:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:51.646 08:43:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:51.646 08:43:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:51.646 08:43:14 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:51.646 08:43:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.646 08:43:14 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:51.646 08:43:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:51.646 08:43:14 
rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:51.647 08:43:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.647 08:43:14 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:51.647 08:43:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:51.647 08:43:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:51.908 08:43:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:51.908 00:04:51.908 real 0m0.150s 00:04:51.908 user 0m0.094s 00:04:51.908 sys 0m0.019s 00:04:51.908 08:43:14 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:51.908 08:43:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.908 ************************************ 00:04:51.908 END TEST rpc_plugins 00:04:51.908 ************************************ 00:04:51.908 08:43:14 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:51.908 08:43:14 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:51.908 08:43:14 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:51.908 08:43:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.908 ************************************ 00:04:51.908 START TEST rpc_trace_cmd_test 00:04:51.908 ************************************ 00:04:51.908 08:43:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:04:51.908 08:43:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:51.908 08:43:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:51.908 08:43:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:51.908 08:43:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:51.908 08:43:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:51.908 08:43:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:51.908 "tpoint_shm_path": 
"/dev/shm/spdk_tgt_trace.pid2352314", 00:04:51.908 "tpoint_group_mask": "0x8", 00:04:51.908 "iscsi_conn": { 00:04:51.908 "mask": "0x2", 00:04:51.908 "tpoint_mask": "0x0" 00:04:51.908 }, 00:04:51.908 "scsi": { 00:04:51.908 "mask": "0x4", 00:04:51.908 "tpoint_mask": "0x0" 00:04:51.908 }, 00:04:51.908 "bdev": { 00:04:51.908 "mask": "0x8", 00:04:51.908 "tpoint_mask": "0xffffffffffffffff" 00:04:51.908 }, 00:04:51.908 "nvmf_rdma": { 00:04:51.908 "mask": "0x10", 00:04:51.908 "tpoint_mask": "0x0" 00:04:51.908 }, 00:04:51.908 "nvmf_tcp": { 00:04:51.908 "mask": "0x20", 00:04:51.908 "tpoint_mask": "0x0" 00:04:51.908 }, 00:04:51.908 "ftl": { 00:04:51.908 "mask": "0x40", 00:04:51.908 "tpoint_mask": "0x0" 00:04:51.908 }, 00:04:51.908 "blobfs": { 00:04:51.908 "mask": "0x80", 00:04:51.908 "tpoint_mask": "0x0" 00:04:51.908 }, 00:04:51.908 "dsa": { 00:04:51.908 "mask": "0x200", 00:04:51.908 "tpoint_mask": "0x0" 00:04:51.908 }, 00:04:51.908 "thread": { 00:04:51.908 "mask": "0x400", 00:04:51.908 "tpoint_mask": "0x0" 00:04:51.908 }, 00:04:51.908 "nvme_pcie": { 00:04:51.908 "mask": "0x800", 00:04:51.908 "tpoint_mask": "0x0" 00:04:51.908 }, 00:04:51.908 "iaa": { 00:04:51.908 "mask": "0x1000", 00:04:51.908 "tpoint_mask": "0x0" 00:04:51.908 }, 00:04:51.908 "nvme_tcp": { 00:04:51.908 "mask": "0x2000", 00:04:51.908 "tpoint_mask": "0x0" 00:04:51.908 }, 00:04:51.908 "bdev_nvme": { 00:04:51.908 "mask": "0x4000", 00:04:51.908 "tpoint_mask": "0x0" 00:04:51.908 }, 00:04:51.908 "sock": { 00:04:51.908 "mask": "0x8000", 00:04:51.908 "tpoint_mask": "0x0" 00:04:51.908 } 00:04:51.908 }' 00:04:51.908 08:43:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:51.908 08:43:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:51.908 08:43:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:51.908 08:43:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:51.908 08:43:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 
'has("tpoint_shm_path")' 00:04:51.908 08:43:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:51.908 08:43:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:51.908 08:43:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:52.169 08:43:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:52.169 08:43:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:52.169 00:04:52.169 real 0m0.225s 00:04:52.169 user 0m0.185s 00:04:52.169 sys 0m0.030s 00:04:52.169 08:43:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:52.169 08:43:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:52.169 ************************************ 00:04:52.169 END TEST rpc_trace_cmd_test 00:04:52.169 ************************************ 00:04:52.169 08:43:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:52.169 08:43:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:52.169 08:43:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:52.169 08:43:14 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:52.169 08:43:14 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:52.169 08:43:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.169 ************************************ 00:04:52.169 START TEST rpc_daemon_integrity 00:04:52.169 ************************************ 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.169 08:43:14 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:52.169 { 00:04:52.169 "name": "Malloc2", 00:04:52.169 "aliases": [ 00:04:52.169 "b719a139-74fc-4a5c-83be-923331a0e641" 00:04:52.169 ], 00:04:52.169 "product_name": "Malloc disk", 00:04:52.169 "block_size": 512, 00:04:52.169 "num_blocks": 16384, 00:04:52.169 "uuid": "b719a139-74fc-4a5c-83be-923331a0e641", 00:04:52.169 "assigned_rate_limits": { 00:04:52.169 "rw_ios_per_sec": 0, 00:04:52.169 "rw_mbytes_per_sec": 0, 00:04:52.169 "r_mbytes_per_sec": 0, 00:04:52.169 "w_mbytes_per_sec": 0 00:04:52.169 }, 00:04:52.169 "claimed": false, 00:04:52.169 "zoned": false, 00:04:52.169 "supported_io_types": { 00:04:52.169 "read": true, 00:04:52.169 "write": true, 00:04:52.169 "unmap": true, 00:04:52.169 "write_zeroes": true, 00:04:52.169 "flush": true, 00:04:52.169 "reset": true, 00:04:52.169 "compare": false, 00:04:52.169 "compare_and_write": 
false, 00:04:52.169 "abort": true, 00:04:52.169 "nvme_admin": false, 00:04:52.169 "nvme_io": false 00:04:52.169 }, 00:04:52.169 "memory_domains": [ 00:04:52.169 { 00:04:52.169 "dma_device_id": "system", 00:04:52.169 "dma_device_type": 1 00:04:52.169 }, 00:04:52.169 { 00:04:52.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.169 "dma_device_type": 2 00:04:52.169 } 00:04:52.169 ], 00:04:52.169 "driver_specific": {} 00:04:52.169 } 00:04:52.169 ]' 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.169 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.170 [2024-06-09 08:43:14.725120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:52.170 [2024-06-09 08:43:14.725151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:52.170 [2024-06-09 08:43:14.725164] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeb5e40 00:04:52.170 [2024-06-09 08:43:14.725171] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:52.170 [2024-06-09 08:43:14.726385] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:52.170 [2024-06-09 08:43:14.726411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:52.431 Passthru0 00:04:52.431 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.431 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:52.431 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.431 08:43:14 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.431 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.431 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:52.431 { 00:04:52.431 "name": "Malloc2", 00:04:52.431 "aliases": [ 00:04:52.431 "b719a139-74fc-4a5c-83be-923331a0e641" 00:04:52.431 ], 00:04:52.431 "product_name": "Malloc disk", 00:04:52.431 "block_size": 512, 00:04:52.431 "num_blocks": 16384, 00:04:52.431 "uuid": "b719a139-74fc-4a5c-83be-923331a0e641", 00:04:52.431 "assigned_rate_limits": { 00:04:52.431 "rw_ios_per_sec": 0, 00:04:52.431 "rw_mbytes_per_sec": 0, 00:04:52.431 "r_mbytes_per_sec": 0, 00:04:52.431 "w_mbytes_per_sec": 0 00:04:52.431 }, 00:04:52.431 "claimed": true, 00:04:52.431 "claim_type": "exclusive_write", 00:04:52.431 "zoned": false, 00:04:52.431 "supported_io_types": { 00:04:52.431 "read": true, 00:04:52.431 "write": true, 00:04:52.431 "unmap": true, 00:04:52.431 "write_zeroes": true, 00:04:52.431 "flush": true, 00:04:52.431 "reset": true, 00:04:52.431 "compare": false, 00:04:52.431 "compare_and_write": false, 00:04:52.431 "abort": true, 00:04:52.431 "nvme_admin": false, 00:04:52.432 "nvme_io": false 00:04:52.432 }, 00:04:52.432 "memory_domains": [ 00:04:52.432 { 00:04:52.432 "dma_device_id": "system", 00:04:52.432 "dma_device_type": 1 00:04:52.432 }, 00:04:52.432 { 00:04:52.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.432 "dma_device_type": 2 00:04:52.432 } 00:04:52.432 ], 00:04:52.432 "driver_specific": {} 00:04:52.432 }, 00:04:52.432 { 00:04:52.432 "name": "Passthru0", 00:04:52.432 "aliases": [ 00:04:52.432 "7d3984cd-3eac-5919-acaa-6b48331e7ad0" 00:04:52.432 ], 00:04:52.432 "product_name": "passthru", 00:04:52.432 "block_size": 512, 00:04:52.432 "num_blocks": 16384, 00:04:52.432 "uuid": "7d3984cd-3eac-5919-acaa-6b48331e7ad0", 00:04:52.432 "assigned_rate_limits": { 00:04:52.432 "rw_ios_per_sec": 0, 00:04:52.432 "rw_mbytes_per_sec": 0, 
00:04:52.432 "r_mbytes_per_sec": 0, 00:04:52.432 "w_mbytes_per_sec": 0 00:04:52.432 }, 00:04:52.432 "claimed": false, 00:04:52.432 "zoned": false, 00:04:52.432 "supported_io_types": { 00:04:52.432 "read": true, 00:04:52.432 "write": true, 00:04:52.432 "unmap": true, 00:04:52.432 "write_zeroes": true, 00:04:52.432 "flush": true, 00:04:52.432 "reset": true, 00:04:52.432 "compare": false, 00:04:52.432 "compare_and_write": false, 00:04:52.432 "abort": true, 00:04:52.432 "nvme_admin": false, 00:04:52.432 "nvme_io": false 00:04:52.432 }, 00:04:52.432 "memory_domains": [ 00:04:52.432 { 00:04:52.432 "dma_device_id": "system", 00:04:52.432 "dma_device_type": 1 00:04:52.432 }, 00:04:52.432 { 00:04:52.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.432 "dma_device_type": 2 00:04:52.432 } 00:04:52.432 ], 00:04:52.432 "driver_specific": { 00:04:52.432 "passthru": { 00:04:52.432 "name": "Passthru0", 00:04:52.432 "base_bdev_name": "Malloc2" 00:04:52.432 } 00:04:52.432 } 00:04:52.432 } 00:04:52.432 ]' 00:04:52.432 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:52.432 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:52.432 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:52.432 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.432 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.432 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.432 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:52.432 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.432 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.432 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.432 08:43:14 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:52.432 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.432 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.432 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.432 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:52.432 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:52.432 08:43:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:52.432 00:04:52.432 real 0m0.285s 00:04:52.432 user 0m0.183s 00:04:52.432 sys 0m0.038s 00:04:52.432 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:52.432 08:43:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.432 ************************************ 00:04:52.432 END TEST rpc_daemon_integrity 00:04:52.432 ************************************ 00:04:52.432 08:43:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:52.432 08:43:14 rpc -- rpc/rpc.sh@84 -- # killprocess 2352314 00:04:52.432 08:43:14 rpc -- common/autotest_common.sh@949 -- # '[' -z 2352314 ']' 00:04:52.432 08:43:14 rpc -- common/autotest_common.sh@953 -- # kill -0 2352314 00:04:52.432 08:43:14 rpc -- common/autotest_common.sh@954 -- # uname 00:04:52.432 08:43:14 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:52.432 08:43:14 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2352314 00:04:52.432 08:43:14 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:52.432 08:43:14 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:52.432 08:43:14 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2352314' 00:04:52.432 killing process with pid 2352314 00:04:52.432 08:43:14 rpc -- common/autotest_common.sh@968 -- # kill 2352314 
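The killprocess() call that just terminated spdk_tgt pid 2352314 follows a guard pattern visible in the xtrace above: confirm the PID is alive, resolve its command name, and refuse to blind-kill sudo. A minimal, self-contained sketch of that flow (using `sleep` as a stand-in for the spdk_tgt reactor; names are illustrative, not the exact autotest_common.sh body):

```shell
# Hedged sketch of the killprocess() guard seen above.
sleep 30 &
pid=$!
kill -0 "$pid"                            # autotest_common.sh@953: is the PID alive?
name=$(ps --no-headers -o comm= "$pid")   # @955: resolve the command name
[ "$name" != sudo ] && kill "$pid"        # @959/@968: never blind-kill sudo
wait "$pid" 2>/dev/null                   # @973: reap the process, no zombie left
echo "killed $name"
```

The `kill -0` probe sends no signal at all; it only asks the kernel whether the PID exists and is signalable, which is why the helper uses it before the real kill.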
00:04:52.432 08:43:14 rpc -- common/autotest_common.sh@973 -- # wait 2352314 00:04:52.692 00:04:52.692 real 0m2.446s 00:04:52.692 user 0m3.209s 00:04:52.692 sys 0m0.683s 00:04:52.692 08:43:15 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:52.692 08:43:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.692 ************************************ 00:04:52.692 END TEST rpc 00:04:52.692 ************************************ 00:04:52.692 08:43:15 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:52.692 08:43:15 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:52.692 08:43:15 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:52.692 08:43:15 -- common/autotest_common.sh@10 -- # set +x 00:04:52.692 ************************************ 00:04:52.692 START TEST skip_rpc 00:04:52.692 ************************************ 00:04:52.692 08:43:15 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:52.952 * Looking for test storage... 
00:04:52.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:52.952 08:43:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:52.952 08:43:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:52.952 08:43:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:52.952 08:43:15 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:52.952 08:43:15 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:52.952 08:43:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.952 ************************************ 00:04:52.952 START TEST skip_rpc 00:04:52.952 ************************************ 00:04:52.952 08:43:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:04:52.952 08:43:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2352911 00:04:52.952 08:43:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.952 08:43:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:52.952 08:43:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:52.952 [2024-06-09 08:43:15.430106] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
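Because spdk_tgt was started with --no-rpc-server, the rpc_cmd call below is *expected* to fail, and the test wraps it in the status-inverting NOT() helper whose xtrace follows. A simplified sketch of that helper (illustrative only, not the exact autotest_common.sh body; `false` stands in for the failing rpc_cmd call):

```shell
# Hedged sketch of the NOT() expected-failure wrapper exercised below.
NOT() {
    if "$@"; then
        return 1    # wrapped command succeeded: the expectation failed
    else
        return 0    # wrapped command failed, as the test requires
    fi
}
NOT false && echo "rpc server correctly unreachable"
NOT true  || echo "a succeeding command makes NOT fail"
```

The real helper additionally records the exit status in `es` and distinguishes signal deaths (`es > 128`), as the `(( es > 128 ))` check in the trace shows.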
00:04:52.952 [2024-06-09 08:43:15.430154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352911 ] 00:04:52.952 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.952 [2024-06-09 08:43:15.489172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.212 [2024-06-09 08:43:15.554028] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:58.501 
08:43:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2352911 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 2352911 ']' 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 2352911 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2352911 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2352911' 00:04:58.501 killing process with pid 2352911 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 2352911 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 2352911 00:04:58.501 00:04:58.501 real 0m5.277s 00:04:58.501 user 0m5.090s 00:04:58.501 sys 0m0.224s 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:58.501 08:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.501 ************************************ 00:04:58.501 END TEST skip_rpc 00:04:58.501 ************************************ 00:04:58.501 08:43:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:58.501 08:43:20 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:58.501 08:43:20 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:58.501 08:43:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.501 
************************************ 00:04:58.501 START TEST skip_rpc_with_json 00:04:58.501 ************************************ 00:04:58.501 08:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:04:58.501 08:43:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:58.501 08:43:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2353966 00:04:58.501 08:43:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.501 08:43:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2353966 00:04:58.501 08:43:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.501 08:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 2353966 ']' 00:04:58.501 08:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.501 08:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:58.501 08:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.501 08:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:58.501 08:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.501 [2024-06-09 08:43:20.784442] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:04:58.501 [2024-06-09 08:43:20.784494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2353966 ] 00:04:58.501 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.501 [2024-06-09 08:43:20.845819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.501 [2024-06-09 08:43:20.916904] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.072 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:59.072 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:04:59.072 08:43:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:59.072 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:59.072 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.072 [2024-06-09 08:43:21.552329] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:59.072 request: 00:04:59.072 { 00:04:59.072 "trtype": "tcp", 00:04:59.072 "method": "nvmf_get_transports", 00:04:59.072 "req_id": 1 00:04:59.072 } 00:04:59.072 Got JSON-RPC error response 00:04:59.072 response: 00:04:59.072 { 00:04:59.072 "code": -19, 00:04:59.072 "message": "No such device" 00:04:59.072 } 00:04:59.072 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:59.072 08:43:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:59.072 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:59.072 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.072 [2024-06-09 08:43:21.564445] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:04:59.072 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:59.072 08:43:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:59.072 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:59.072 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.333 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:59.333 08:43:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:59.333 { 00:04:59.333 "subsystems": [ 00:04:59.333 { 00:04:59.333 "subsystem": "keyring", 00:04:59.333 "config": [] 00:04:59.333 }, 00:04:59.334 { 00:04:59.334 "subsystem": "iobuf", 00:04:59.334 "config": [ 00:04:59.334 { 00:04:59.334 "method": "iobuf_set_options", 00:04:59.334 "params": { 00:04:59.334 "small_pool_count": 8192, 00:04:59.334 "large_pool_count": 1024, 00:04:59.334 "small_bufsize": 8192, 00:04:59.334 "large_bufsize": 135168 00:04:59.334 } 00:04:59.334 } 00:04:59.334 ] 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "subsystem": "sock", 00:04:59.334 "config": [ 00:04:59.334 { 00:04:59.334 "method": "sock_set_default_impl", 00:04:59.334 "params": { 00:04:59.334 "impl_name": "posix" 00:04:59.334 } 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "method": "sock_impl_set_options", 00:04:59.334 "params": { 00:04:59.334 "impl_name": "ssl", 00:04:59.334 "recv_buf_size": 4096, 00:04:59.334 "send_buf_size": 4096, 00:04:59.334 "enable_recv_pipe": true, 00:04:59.334 "enable_quickack": false, 00:04:59.334 "enable_placement_id": 0, 00:04:59.334 "enable_zerocopy_send_server": true, 00:04:59.334 "enable_zerocopy_send_client": false, 00:04:59.334 "zerocopy_threshold": 0, 00:04:59.334 "tls_version": 0, 00:04:59.334 "enable_ktls": false 00:04:59.334 } 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "method": 
"sock_impl_set_options", 00:04:59.334 "params": { 00:04:59.334 "impl_name": "posix", 00:04:59.334 "recv_buf_size": 2097152, 00:04:59.334 "send_buf_size": 2097152, 00:04:59.334 "enable_recv_pipe": true, 00:04:59.334 "enable_quickack": false, 00:04:59.334 "enable_placement_id": 0, 00:04:59.334 "enable_zerocopy_send_server": true, 00:04:59.334 "enable_zerocopy_send_client": false, 00:04:59.334 "zerocopy_threshold": 0, 00:04:59.334 "tls_version": 0, 00:04:59.334 "enable_ktls": false 00:04:59.334 } 00:04:59.334 } 00:04:59.334 ] 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "subsystem": "vmd", 00:04:59.334 "config": [] 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "subsystem": "accel", 00:04:59.334 "config": [ 00:04:59.334 { 00:04:59.334 "method": "accel_set_options", 00:04:59.334 "params": { 00:04:59.334 "small_cache_size": 128, 00:04:59.334 "large_cache_size": 16, 00:04:59.334 "task_count": 2048, 00:04:59.334 "sequence_count": 2048, 00:04:59.334 "buf_count": 2048 00:04:59.334 } 00:04:59.334 } 00:04:59.334 ] 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "subsystem": "bdev", 00:04:59.334 "config": [ 00:04:59.334 { 00:04:59.334 "method": "bdev_set_options", 00:04:59.334 "params": { 00:04:59.334 "bdev_io_pool_size": 65535, 00:04:59.334 "bdev_io_cache_size": 256, 00:04:59.334 "bdev_auto_examine": true, 00:04:59.334 "iobuf_small_cache_size": 128, 00:04:59.334 "iobuf_large_cache_size": 16 00:04:59.334 } 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "method": "bdev_raid_set_options", 00:04:59.334 "params": { 00:04:59.334 "process_window_size_kb": 1024 00:04:59.334 } 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "method": "bdev_iscsi_set_options", 00:04:59.334 "params": { 00:04:59.334 "timeout_sec": 30 00:04:59.334 } 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "method": "bdev_nvme_set_options", 00:04:59.334 "params": { 00:04:59.334 "action_on_timeout": "none", 00:04:59.334 "timeout_us": 0, 00:04:59.334 "timeout_admin_us": 0, 00:04:59.334 "keep_alive_timeout_ms": 10000, 00:04:59.334 
"arbitration_burst": 0, 00:04:59.334 "low_priority_weight": 0, 00:04:59.334 "medium_priority_weight": 0, 00:04:59.334 "high_priority_weight": 0, 00:04:59.334 "nvme_adminq_poll_period_us": 10000, 00:04:59.334 "nvme_ioq_poll_period_us": 0, 00:04:59.334 "io_queue_requests": 0, 00:04:59.334 "delay_cmd_submit": true, 00:04:59.334 "transport_retry_count": 4, 00:04:59.334 "bdev_retry_count": 3, 00:04:59.334 "transport_ack_timeout": 0, 00:04:59.334 "ctrlr_loss_timeout_sec": 0, 00:04:59.334 "reconnect_delay_sec": 0, 00:04:59.334 "fast_io_fail_timeout_sec": 0, 00:04:59.334 "disable_auto_failback": false, 00:04:59.334 "generate_uuids": false, 00:04:59.334 "transport_tos": 0, 00:04:59.334 "nvme_error_stat": false, 00:04:59.334 "rdma_srq_size": 0, 00:04:59.334 "io_path_stat": false, 00:04:59.334 "allow_accel_sequence": false, 00:04:59.334 "rdma_max_cq_size": 0, 00:04:59.334 "rdma_cm_event_timeout_ms": 0, 00:04:59.334 "dhchap_digests": [ 00:04:59.334 "sha256", 00:04:59.334 "sha384", 00:04:59.334 "sha512" 00:04:59.334 ], 00:04:59.334 "dhchap_dhgroups": [ 00:04:59.334 "null", 00:04:59.334 "ffdhe2048", 00:04:59.334 "ffdhe3072", 00:04:59.334 "ffdhe4096", 00:04:59.334 "ffdhe6144", 00:04:59.334 "ffdhe8192" 00:04:59.334 ] 00:04:59.334 } 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "method": "bdev_nvme_set_hotplug", 00:04:59.334 "params": { 00:04:59.334 "period_us": 100000, 00:04:59.334 "enable": false 00:04:59.334 } 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "method": "bdev_wait_for_examine" 00:04:59.334 } 00:04:59.334 ] 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "subsystem": "scsi", 00:04:59.334 "config": null 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "subsystem": "scheduler", 00:04:59.334 "config": [ 00:04:59.334 { 00:04:59.334 "method": "framework_set_scheduler", 00:04:59.334 "params": { 00:04:59.334 "name": "static" 00:04:59.334 } 00:04:59.334 } 00:04:59.334 ] 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "subsystem": "vhost_scsi", 00:04:59.334 "config": [] 00:04:59.334 }, 
00:04:59.334 { 00:04:59.334 "subsystem": "vhost_blk", 00:04:59.334 "config": [] 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "subsystem": "ublk", 00:04:59.334 "config": [] 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "subsystem": "nbd", 00:04:59.334 "config": [] 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "subsystem": "nvmf", 00:04:59.334 "config": [ 00:04:59.334 { 00:04:59.334 "method": "nvmf_set_config", 00:04:59.334 "params": { 00:04:59.334 "discovery_filter": "match_any", 00:04:59.334 "admin_cmd_passthru": { 00:04:59.334 "identify_ctrlr": false 00:04:59.334 } 00:04:59.334 } 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "method": "nvmf_set_max_subsystems", 00:04:59.334 "params": { 00:04:59.334 "max_subsystems": 1024 00:04:59.334 } 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "method": "nvmf_set_crdt", 00:04:59.334 "params": { 00:04:59.334 "crdt1": 0, 00:04:59.334 "crdt2": 0, 00:04:59.334 "crdt3": 0 00:04:59.334 } 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "method": "nvmf_create_transport", 00:04:59.334 "params": { 00:04:59.334 "trtype": "TCP", 00:04:59.334 "max_queue_depth": 128, 00:04:59.334 "max_io_qpairs_per_ctrlr": 127, 00:04:59.334 "in_capsule_data_size": 4096, 00:04:59.334 "max_io_size": 131072, 00:04:59.334 "io_unit_size": 131072, 00:04:59.334 "max_aq_depth": 128, 00:04:59.334 "num_shared_buffers": 511, 00:04:59.334 "buf_cache_size": 4294967295, 00:04:59.334 "dif_insert_or_strip": false, 00:04:59.334 "zcopy": false, 00:04:59.334 "c2h_success": true, 00:04:59.334 "sock_priority": 0, 00:04:59.334 "abort_timeout_sec": 1, 00:04:59.334 "ack_timeout": 0, 00:04:59.334 "data_wr_pool_size": 0 00:04:59.334 } 00:04:59.334 } 00:04:59.334 ] 00:04:59.334 }, 00:04:59.334 { 00:04:59.334 "subsystem": "iscsi", 00:04:59.334 "config": [ 00:04:59.334 { 00:04:59.334 "method": "iscsi_set_options", 00:04:59.334 "params": { 00:04:59.334 "node_base": "iqn.2016-06.io.spdk", 00:04:59.334 "max_sessions": 128, 00:04:59.334 "max_connections_per_session": 2, 00:04:59.334 "max_queue_depth": 
64, 00:04:59.334 "default_time2wait": 2, 00:04:59.334 "default_time2retain": 20, 00:04:59.334 "first_burst_length": 8192, 00:04:59.334 "immediate_data": true, 00:04:59.334 "allow_duplicated_isid": false, 00:04:59.334 "error_recovery_level": 0, 00:04:59.334 "nop_timeout": 60, 00:04:59.334 "nop_in_interval": 30, 00:04:59.334 "disable_chap": false, 00:04:59.334 "require_chap": false, 00:04:59.334 "mutual_chap": false, 00:04:59.334 "chap_group": 0, 00:04:59.334 "max_large_datain_per_connection": 64, 00:04:59.334 "max_r2t_per_connection": 4, 00:04:59.334 "pdu_pool_size": 36864, 00:04:59.334 "immediate_data_pool_size": 16384, 00:04:59.334 "data_out_pool_size": 2048 00:04:59.334 } 00:04:59.334 } 00:04:59.334 ] 00:04:59.334 } 00:04:59.334 ] 00:04:59.334 } 00:04:59.334 08:43:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:59.334 08:43:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2353966 00:04:59.334 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 2353966 ']' 00:04:59.334 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 2353966 00:04:59.334 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:04:59.334 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:59.334 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2353966 00:04:59.335 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:59.335 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:59.335 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2353966' 00:04:59.335 killing process with pid 2353966 00:04:59.335 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 2353966 
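The config.json dumped by save_config above is what the --json restart below consumes. A hedged sketch of verifying such a dump records the TCP transport, using an abridged hand-written sample rather than the full file:

```shell
# Abridged sample of a save_config dump; only the nvmf subsystem entry
# from the full config.json above is reproduced.
cfg='{
  "subsystems": [
    { "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport",
          "params": { "trtype": "TCP", "max_queue_depth": 128 } }
      ] }
  ]
}'
# Pull the trtype recorded for nvmf_create_transport, as a restart check might.
trtype=$(echo "$cfg" | jq -r \
  '.subsystems[] | select(.subsystem=="nvmf")
   | .config[]   | select(.method=="nvmf_create_transport")
   | .params.trtype')
[ "$trtype" = TCP ] && echo "TCP transport present in saved config"
```

On replay, spdk_tgt issues each recorded method in order, which is why the grep for "TCP Transport Init" in the fresh log is a sufficient round-trip check.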
00:04:59.335 08:43:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 2353966 00:04:59.595 08:43:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2354295 00:04:59.595 08:43:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:59.595 08:43:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:04.886 08:43:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2354295 00:05:04.886 08:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 2354295 ']' 00:05:04.886 08:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 2354295 00:05:04.886 08:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:04.886 08:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:04.886 08:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2354295 00:05:04.886 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:04.886 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:04.886 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2354295' 00:05:04.886 killing process with pid 2354295 00:05:04.886 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 2354295 00:05:04.886 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 2354295 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:04.887 00:05:04.887 real 0m6.532s 00:05:04.887 user 0m6.405s 00:05:04.887 sys 0m0.532s 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.887 ************************************ 00:05:04.887 END TEST skip_rpc_with_json 00:05:04.887 ************************************ 00:05:04.887 08:43:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:04.887 08:43:27 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:04.887 08:43:27 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:04.887 08:43:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.887 ************************************ 00:05:04.887 START TEST skip_rpc_with_delay 00:05:04.887 ************************************ 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.887 [2024-06-09 08:43:27.394208] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:04.887 [2024-06-09 08:43:27.394281] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:04.887 00:05:04.887 real 0m0.068s 00:05:04.887 user 0m0.043s 00:05:04.887 sys 0m0.025s 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:04.887 08:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:04.887 ************************************ 00:05:04.887 END TEST skip_rpc_with_delay 00:05:04.887 ************************************ 00:05:04.887 08:43:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:05.148 08:43:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:05.148 08:43:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:05.148 08:43:27 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:05.148 08:43:27 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:05.148 08:43:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.148 ************************************ 00:05:05.148 START TEST exit_on_failed_rpc_init 00:05:05.148 ************************************ 00:05:05.148 08:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:05:05.148 08:43:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2355378 00:05:05.148 08:43:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2355378 00:05:05.148 08:43:27 
skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.148 08:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 2355378 ']' 00:05:05.148 08:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.149 08:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:05.149 08:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.149 08:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:05.149 08:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:05.149 [2024-06-09 08:43:27.549849] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:05:05.149 [2024-06-09 08:43:27.549900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2355378 ] 00:05:05.149 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.149 [2024-06-09 08:43:27.609887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.149 [2024-06-09 08:43:27.678684] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.131 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:06.131 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:05:06.131 08:43:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.131 08:43:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.131 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:05:06.131 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.131 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.131 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:06.131 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.131 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:06.131 08:43:28 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.131 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:06.131 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.131 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:06.131 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.131 [2024-06-09 08:43:28.351357] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:06.131 [2024-06-09 08:43:28.351415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2355686 ] 00:05:06.131 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.131 [2024-06-09 08:43:28.428322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.131 [2024-06-09 08:43:28.492002] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.131 [2024-06-09 08:43:28.492062] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:06.131 [2024-06-09 08:43:28.492071] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:06.131 [2024-06-09 08:43:28.492078] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:06.131 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:05:06.131 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:06.131 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:05:06.132 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:05:06.132 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:05:06.132 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:06.132 08:43:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:06.132 08:43:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2355378 00:05:06.132 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 2355378 ']' 00:05:06.132 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 2355378 00:05:06.132 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:05:06.132 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:06.132 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2355378 00:05:06.132 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:06.132 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:06.132 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2355378' 
00:05:06.132 killing process with pid 2355378 00:05:06.132 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 2355378 00:05:06.132 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 2355378 00:05:06.393 00:05:06.393 real 0m1.325s 00:05:06.393 user 0m1.548s 00:05:06.393 sys 0m0.360s 00:05:06.393 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:06.393 08:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.393 ************************************ 00:05:06.393 END TEST exit_on_failed_rpc_init 00:05:06.393 ************************************ 00:05:06.393 08:43:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:06.393 00:05:06.393 real 0m13.608s 00:05:06.393 user 0m13.237s 00:05:06.393 sys 0m1.415s 00:05:06.393 08:43:28 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:06.393 08:43:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.393 ************************************ 00:05:06.393 END TEST skip_rpc 00:05:06.393 ************************************ 00:05:06.393 08:43:28 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:06.393 08:43:28 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:06.393 08:43:28 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:06.393 08:43:28 -- common/autotest_common.sh@10 -- # set +x 00:05:06.393 ************************************ 00:05:06.393 START TEST rpc_client 00:05:06.393 ************************************ 00:05:06.393 08:43:28 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:06.654 * Looking for test storage... 
00:05:06.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:06.654 08:43:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:06.654 OK 00:05:06.654 08:43:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:06.654 00:05:06.654 real 0m0.128s 00:05:06.654 user 0m0.060s 00:05:06.654 sys 0m0.077s 00:05:06.654 08:43:29 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:06.654 08:43:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:06.654 ************************************ 00:05:06.654 END TEST rpc_client 00:05:06.654 ************************************ 00:05:06.654 08:43:29 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:06.654 08:43:29 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:06.654 08:43:29 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:06.654 08:43:29 -- common/autotest_common.sh@10 -- # set +x 00:05:06.654 ************************************ 00:05:06.654 START TEST json_config 00:05:06.654 ************************************ 00:05:06.654 08:43:29 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:06.654 08:43:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:06.654 08:43:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.916 08:43:29 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:06.916 08:43:29 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.916 08:43:29 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.916 08:43:29 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.916 08:43:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:06.916 08:43:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.916 08:43:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.916 08:43:29 json_config -- paths/export.sh@5 -- # export PATH 00:05:06.916 08:43:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@47 -- # : 0 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.916 08:43:29 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:06.916 08:43:29 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:06.916 08:43:29 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:06.916 08:43:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:06.916 08:43:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:06.916 08:43:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:06.916 08:43:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:06.916 08:43:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:06.916 08:43:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:06.917 08:43:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:06.917 08:43:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:06.917 08:43:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:06.917 08:43:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:06.917 08:43:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:06.917 08:43:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:06.917 08:43:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:06.917 08:43:29 json_config -- 
json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:06.917 08:43:29 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:06.917 INFO: JSON configuration test init 00:05:06.917 08:43:29 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:06.917 08:43:29 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:06.917 08:43:29 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:06.917 08:43:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.917 08:43:29 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:06.917 08:43:29 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:06.917 08:43:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.917 08:43:29 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:06.917 08:43:29 json_config -- json_config/common.sh@9 -- # local app=target 00:05:06.917 08:43:29 json_config -- json_config/common.sh@10 -- # shift 00:05:06.917 08:43:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:06.917 08:43:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:06.917 08:43:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:06.917 08:43:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.917 08:43:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.917 08:43:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2355867 00:05:06.917 08:43:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:06.917 Waiting for target to run... 
00:05:06.917 08:43:29 json_config -- json_config/common.sh@25 -- # waitforlisten 2355867 /var/tmp/spdk_tgt.sock 00:05:06.917 08:43:29 json_config -- common/autotest_common.sh@830 -- # '[' -z 2355867 ']' 00:05:06.917 08:43:29 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:06.917 08:43:29 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:06.917 08:43:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:06.917 08:43:29 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:06.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:06.917 08:43:29 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:06.917 08:43:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.917 [2024-06-09 08:43:29.313313] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:05:06.917 [2024-06-09 08:43:29.313383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2355867 ] 00:05:06.917 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.178 [2024-06-09 08:43:29.568349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.178 [2024-06-09 08:43:29.618809] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.750 08:43:30 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:07.750 08:43:30 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:07.750 08:43:30 json_config -- json_config/common.sh@26 -- # echo '' 00:05:07.750 00:05:07.750 08:43:30 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:07.750 08:43:30 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:07.750 08:43:30 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:07.750 08:43:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.750 08:43:30 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:07.750 08:43:30 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:07.750 08:43:30 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:07.750 08:43:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.750 08:43:30 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:07.750 08:43:30 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:07.750 08:43:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:08.328 08:43:30 json_config -- 
json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:08.328 08:43:30 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:08.328 08:43:30 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:08.328 08:43:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.328 08:43:30 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:08.328 08:43:30 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:08.328 08:43:30 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:08.328 08:43:30 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:08.328 08:43:30 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:08.328 08:43:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:08.328 08:43:30 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:08.328 08:43:30 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:08.328 08:43:30 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:08.328 08:43:30 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:08.328 08:43:30 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:08.328 08:43:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.328 08:43:30 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:08.328 08:43:30 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:08.328 08:43:30 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:08.328 08:43:30 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 
00:05:08.328 08:43:30 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:08.328 08:43:30 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:08.328 08:43:30 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:08.328 08:43:30 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:08.329 08:43:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.329 08:43:30 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:08.329 08:43:30 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:08.329 08:43:30 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:08.329 08:43:30 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:08.329 08:43:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:08.599 MallocForNvmf0 00:05:08.599 08:43:31 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:08.599 08:43:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:08.599 MallocForNvmf1 00:05:08.599 08:43:31 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:08.599 08:43:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:08.860 [2024-06-09 08:43:31.275517] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.860 08:43:31 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:08.860 08:43:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:09.120 08:43:31 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:09.120 08:43:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:09.120 08:43:31 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:09.120 08:43:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:09.381 08:43:31 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:09.381 08:43:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:09.381 [2024-06-09 08:43:31.825356] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:09.381 08:43:31 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:09.381 08:43:31 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:09.381 08:43:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.381 08:43:31 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:09.381 08:43:31 
json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:09.381 08:43:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.381 08:43:31 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:09.381 08:43:31 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:09.381 08:43:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:09.642 MallocBdevForConfigChangeCheck 00:05:09.642 08:43:32 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:09.642 08:43:32 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:09.642 08:43:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.642 08:43:32 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:09.642 08:43:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.902 08:43:32 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:09.902 INFO: shutting down applications... 
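The target setup logged above (two malloc bdevs, a TCP transport, a subsystem with two namespaces, and a listener) can be sketched as a standalone script. This is a minimal sketch, not the harness itself: the `scripts/rpc.py` path and socket location are assumptions, a running `spdk_tgt` is required, and the commands are skipped when no socket is present.

```shell
# Sketch of the NVMe-oF target setup sequence from the log above.
# Assumes a running spdk_tgt listening on $SOCK and an SPDK checkout
# providing scripts/rpc.py; both paths are illustrative.
SOCK=/var/tmp/spdk_tgt.sock
RPC="scripts/rpc.py -s $SOCK"
NQN=nqn.2016-06.io.spdk:cnode1
if [ -S "$SOCK" ]; then
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MiB bdev, 512 B blocks
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MiB bdev, 1 KiB blocks
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0         # TCP transport
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns "$NQN" MallocForNvmf0
  $RPC nvmf_subsystem_add_ns "$NQN" MallocForNvmf1
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 127.0.0.1 -s 4420
  result=configured
else
  result=skipped                                         # no target running
fi
```

With the listener added, an initiator can reach the subsystem at 127.0.0.1:4420, which matches the `nvmf_tcp_listen` notice in the log.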
00:05:09.902 08:43:32 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:09.902 08:43:32 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:09.902 08:43:32 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:09.902 08:43:32 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:10.474 Calling clear_iscsi_subsystem 00:05:10.474 Calling clear_nvmf_subsystem 00:05:10.474 Calling clear_nbd_subsystem 00:05:10.474 Calling clear_ublk_subsystem 00:05:10.474 Calling clear_vhost_blk_subsystem 00:05:10.474 Calling clear_vhost_scsi_subsystem 00:05:10.474 Calling clear_bdev_subsystem 00:05:10.474 08:43:32 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:10.474 08:43:32 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:10.474 08:43:32 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:10.475 08:43:32 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.475 08:43:32 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:10.475 08:43:32 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:10.735 08:43:33 json_config -- json_config/json_config.sh@345 -- # break 00:05:10.735 08:43:33 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:10.735 08:43:33 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:10.735 08:43:33 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:10.735 08:43:33 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:10.735 08:43:33 json_config -- json_config/common.sh@35 -- # [[ -n 2355867 ]] 00:05:10.735 08:43:33 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2355867 00:05:10.735 08:43:33 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:10.735 08:43:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.735 08:43:33 json_config -- json_config/common.sh@41 -- # kill -0 2355867 00:05:10.735 08:43:33 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.306 08:43:33 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.306 08:43:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.306 08:43:33 json_config -- json_config/common.sh@41 -- # kill -0 2355867 00:05:11.306 08:43:33 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:11.306 08:43:33 json_config -- json_config/common.sh@43 -- # break 00:05:11.306 08:43:33 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:11.306 08:43:33 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:11.306 SPDK target shutdown done 00:05:11.306 08:43:33 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:11.306 INFO: relaunching applications... 
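The shutdown path traced above is a signal-then-poll loop: send the target a signal, then probe with `kill -0` up to 30 times, sleeping 0.5 s between probes. A runnable sketch of that pattern follows; a plain `sleep` stands in for `spdk_tgt`, and SIGTERM stands in for the harness's SIGINT (background children of a non-interactive shell start with SIGINT ignored, so SIGINT would not stop the stand-in).

```shell
# Sketch of the shutdown-and-wait pattern from json_config/common.sh.
sleep 30 &
pid=$!
kill -TERM "$pid"                       # harness uses SIGINT on spdk_tgt
i=0
while [ "$i" -lt 30 ]; do
  kill -0 "$pid" 2>/dev/null || break   # process gone: stop polling
  sleep 0.5
  i=$((i + 1))
done
wait "$pid" 2>/dev/null || true         # reap the stand-in child
if ! kill -0 "$pid" 2>/dev/null; then
  echo "SPDK target shutdown done"
fi
```

The bounded retry count is what turns a hung target into a test failure instead of an indefinite wait.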
00:05:11.306 08:43:33 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.306 08:43:33 json_config -- json_config/common.sh@9 -- # local app=target 00:05:11.306 08:43:33 json_config -- json_config/common.sh@10 -- # shift 00:05:11.306 08:43:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:11.306 08:43:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:11.306 08:43:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:11.306 08:43:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.306 08:43:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.306 08:43:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2356937 00:05:11.306 08:43:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:11.306 Waiting for target to run... 00:05:11.306 08:43:33 json_config -- json_config/common.sh@25 -- # waitforlisten 2356937 /var/tmp/spdk_tgt.sock 00:05:11.307 08:43:33 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.307 08:43:33 json_config -- common/autotest_common.sh@830 -- # '[' -z 2356937 ']' 00:05:11.307 08:43:33 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:11.307 08:43:33 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:11.307 08:43:33 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:11.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:11.307 08:43:33 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:11.307 08:43:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.307 [2024-06-09 08:43:33.657857] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:11.307 [2024-06-09 08:43:33.657915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356937 ] 00:05:11.307 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.567 [2024-06-09 08:43:33.876341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.567 [2024-06-09 08:43:33.926220] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.139 [2024-06-09 08:43:34.417377] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.139 [2024-06-09 08:43:34.449750] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:12.139 08:43:34 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:12.139 08:43:34 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:12.139 08:43:34 json_config -- json_config/common.sh@26 -- # echo '' 00:05:12.139 00:05:12.139 08:43:34 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:12.139 08:43:34 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:12.139 INFO: Checking if target configuration is the same... 
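The `waitforlisten` step that just completed boils down to polling for the target's UNIX-domain socket with bounded retries. A self-contained sketch, with a short-lived Python listener standing in for `spdk_tgt` and an illustrative socket path:

```shell
# Hedged sketch of the "waitforlisten" idea: poll until a UNIX socket
# appears, giving up after a bounded number of retries.
sock=/tmp/waitforlisten_demo.sock
rm -f "$sock"
python3 - "$sock" <<'EOF' &
import socket, sys, time
s = socket.socket(socket.AF_UNIX)
s.bind(sys.argv[1])
s.listen(1)
time.sleep(3)   # keep the socket alive briefly, like a running target
EOF
retries=100
while [ "$retries" -gt 0 ] && [ ! -S "$sock" ]; do
  sleep 0.1
  retries=$((retries - 1))
done
[ -S "$sock" ] && echo "listening on $sock"
```

Polling for the socket file (rather than sleeping a fixed interval) lets the test proceed as soon as the target is actually ready.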
00:05:12.139 08:43:34 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.139 08:43:34 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:12.139 08:43:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.139 + '[' 2 -ne 2 ']' 00:05:12.139 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:12.139 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:12.139 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:12.139 +++ basename /dev/fd/62 00:05:12.139 ++ mktemp /tmp/62.XXX 00:05:12.139 + tmp_file_1=/tmp/62.oQX 00:05:12.139 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.139 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:12.139 + tmp_file_2=/tmp/spdk_tgt_config.json.V7w 00:05:12.139 + ret=0 00:05:12.140 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.401 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.401 + diff -u /tmp/62.oQX /tmp/spdk_tgt_config.json.V7w 00:05:12.401 + echo 'INFO: JSON config files are the same' 00:05:12.401 INFO: JSON config files are the same 00:05:12.401 + rm /tmp/62.oQX /tmp/spdk_tgt_config.json.V7w 00:05:12.401 + exit 0 00:05:12.401 08:43:34 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:12.401 08:43:34 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:12.401 INFO: changing configuration and checking if this can be detected... 
00:05:12.401 08:43:34 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:12.401 08:43:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:12.674 08:43:35 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:12.674 08:43:35 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.674 08:43:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.674 + '[' 2 -ne 2 ']' 00:05:12.674 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:12.674 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:12.674 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:12.674 +++ basename /dev/fd/62 00:05:12.674 ++ mktemp /tmp/62.XXX 00:05:12.674 + tmp_file_1=/tmp/62.tr0 00:05:12.674 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.674 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:12.674 + tmp_file_2=/tmp/spdk_tgt_config.json.NWD 00:05:12.674 + ret=0 00:05:12.674 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.936 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.936 + diff -u /tmp/62.tr0 /tmp/spdk_tgt_config.json.NWD 00:05:12.936 + ret=1 00:05:12.936 + echo '=== Start of file: /tmp/62.tr0 ===' 00:05:12.936 + cat /tmp/62.tr0 00:05:12.936 + echo '=== End of file: /tmp/62.tr0 ===' 00:05:12.936 + echo '' 00:05:12.936 + echo '=== Start of file: /tmp/spdk_tgt_config.json.NWD ===' 00:05:12.936 + cat /tmp/spdk_tgt_config.json.NWD 00:05:12.936 + echo '=== End of file: /tmp/spdk_tgt_config.json.NWD ===' 00:05:12.936 + echo '' 00:05:12.936 + rm /tmp/62.tr0 /tmp/spdk_tgt_config.json.NWD 00:05:12.936 + exit 1 00:05:12.936 08:43:35 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:12.936 INFO: configuration change detected. 
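Both comparisons above run each config through `config_filter.py -method sort` before diffing, so that key order alone never reports a change. A minimal sketch of that normalize-then-diff idea (the file names are illustrative, and plain `json.dumps` with sorted keys stands in for the real filter):

```shell
# Normalize two JSON documents, then compare them with diff, as the
# json_diff.sh flow above does.
norm() { python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), sort_keys=True))'; }
printf '{"b": 1, "a": 2}' > /tmp/cfg_before.json
printf '{"a": 2, "b": 1}' > /tmp/cfg_after.json
norm < /tmp/cfg_before.json > /tmp/cfg_before.sorted
norm < /tmp/cfg_after.json  > /tmp/cfg_after.sorted
if diff -u /tmp/cfg_before.sorted /tmp/cfg_after.sorted > /dev/null; then
  echo "INFO: JSON config files are the same"
else
  echo "INFO: configuration change detected."
fi
# prints: INFO: JSON config files are the same
```

Deleting `MallocBdevForConfigChangeCheck`, as the test does, changes the normalized output and flips the diff to the `ret=1` branch seen in the log.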
00:05:12.936 08:43:35 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:12.936 08:43:35 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:12.936 08:43:35 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:12.936 08:43:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.937 08:43:35 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:12.937 08:43:35 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:12.937 08:43:35 json_config -- json_config/json_config.sh@317 -- # [[ -n 2356937 ]] 00:05:12.937 08:43:35 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:12.937 08:43:35 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:12.937 08:43:35 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:12.937 08:43:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.937 08:43:35 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:12.937 08:43:35 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:12.937 08:43:35 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:12.937 08:43:35 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:12.937 08:43:35 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:12.937 08:43:35 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:12.937 08:43:35 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:12.937 08:43:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.937 08:43:35 json_config -- json_config/json_config.sh@323 -- # killprocess 2356937 00:05:12.937 08:43:35 json_config -- common/autotest_common.sh@949 -- # '[' -z 2356937 ']' 00:05:12.937 08:43:35 json_config -- common/autotest_common.sh@953 -- # kill -0 
2356937 00:05:12.937 08:43:35 json_config -- common/autotest_common.sh@954 -- # uname 00:05:12.937 08:43:35 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:12.937 08:43:35 json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2356937 00:05:12.937 08:43:35 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:12.937 08:43:35 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:12.937 08:43:35 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2356937' 00:05:12.937 killing process with pid 2356937 00:05:12.937 08:43:35 json_config -- common/autotest_common.sh@968 -- # kill 2356937 00:05:12.937 08:43:35 json_config -- common/autotest_common.sh@973 -- # wait 2356937 00:05:13.196 08:43:35 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.458 08:43:35 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:13.458 08:43:35 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:13.458 08:43:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.458 08:43:35 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:13.458 08:43:35 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:13.458 INFO: Success 00:05:13.458 00:05:13.458 real 0m6.658s 00:05:13.458 user 0m8.148s 00:05:13.458 sys 0m1.530s 00:05:13.458 08:43:35 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:13.458 08:43:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.458 ************************************ 00:05:13.458 END TEST json_config 00:05:13.458 ************************************ 00:05:13.458 08:43:35 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:13.458 08:43:35 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:13.458 08:43:35 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:13.458 08:43:35 -- common/autotest_common.sh@10 -- # set +x 00:05:13.458 ************************************ 00:05:13.458 START TEST json_config_extra_key 00:05:13.458 ************************************ 00:05:13.458 08:43:35 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:13.458 08:43:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.458 08:43:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:13.458 08:43:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.458 08:43:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.458 08:43:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.458 08:43:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.458 08:43:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.458 08:43:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.458 08:43:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.458 08:43:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.458 08:43:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.458 08:43:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.458 08:43:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:13.458 08:43:35 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:13.458 08:43:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.458 08:43:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.458 08:43:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:13.458 08:43:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.458 08:43:35 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:13.458 08:43:35 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.458 08:43:35 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.458 08:43:35 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.458 08:43:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.458 08:43:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.459 08:43:35 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.459 08:43:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:13.459 08:43:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.459 08:43:35 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:13.459 08:43:35 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:13.459 08:43:35 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:13.459 08:43:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.459 08:43:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.459 08:43:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.459 08:43:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:13.459 08:43:35 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:13.459 08:43:35 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:13.459 08:43:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:13.459 08:43:35 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:13.459 08:43:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:13.459 08:43:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:13.459 08:43:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:13.459 08:43:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:13.459 08:43:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:13.459 08:43:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:13.459 08:43:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:13.459 08:43:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:13.459 08:43:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:13.459 INFO: launching applications... 
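The `declare -A` lines above show how `json_config/common.sh` tracks each app: parallel associative arrays keyed by app name, holding the PID, RPC socket, launch parameters, and config path. A small bash sketch of that bookkeeping (the PID value here is a stand-in):

```shell
# Associative-array bookkeeping in the style of json_config/common.sh:
# one entry per app name ("target", "initiator", ...).
declare -A app_pid app_socket app_params
app_socket[target]=/var/tmp/spdk_tgt.sock
app_params[target]='-m 0x1 -s 1024'
app_pid[target]=$$   # illustrative; the harness stores the spawned spdk_tgt PID
echo "target -> pid ${app_pid[target]}, socket ${app_socket[target]}, params ${app_params[target]}"
```

Keying everything by app name is what lets the same start/shutdown helpers serve both the target and, in other tests, an initiator.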
00:05:13.459 08:43:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:13.459 08:43:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:13.459 08:43:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:13.459 08:43:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.459 08:43:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.459 08:43:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.459 08:43:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.459 08:43:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.459 08:43:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2357461 00:05:13.459 08:43:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.459 Waiting for target to run... 
00:05:13.459 08:43:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2357461 /var/tmp/spdk_tgt.sock 00:05:13.459 08:43:35 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 2357461 ']' 00:05:13.459 08:43:35 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.459 08:43:35 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:13.459 08:43:35 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:13.459 08:43:35 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.459 08:43:35 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:13.459 08:43:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:13.720 [2024-06-09 08:43:36.028803] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:05:13.720 [2024-06-09 08:43:36.028883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2357461 ] 00:05:13.720 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.981 [2024-06-09 08:43:36.452314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.981 [2024-06-09 08:43:36.514489] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.552 08:43:36 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:14.552 08:43:36 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:05:14.552 08:43:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:14.552 00:05:14.552 08:43:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:14.552 INFO: shutting down applications... 
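The `EAL: No free 2048 kB hugepages reported on node 1` notice above concerns the hugepages DPDK's EAL looks for at startup. On Linux the kernel's view of them can be inspected through `/proc/meminfo`; a sketch (non-Linux hosts simply skip the check):

```shell
# Show kernel hugepage counters, which the EAL notice above refers to.
if [ -r /proc/meminfo ]; then
  grep '^HugePages_' /proc/meminfo
  hp_lines=$(grep -c '^HugePages_' /proc/meminfo)
else
  echo "no /proc/meminfo on this host"
  hp_lines=0
fi
```

A zero `HugePages_Free` count on a NUMA node is harmless here only because the harness pre-allocates memory elsewhere; on a fresh host it is the usual reason `spdk_tgt` fails to start.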
00:05:14.552 08:43:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:14.552 08:43:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:14.552 08:43:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:14.552 08:43:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2357461 ]] 00:05:14.552 08:43:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2357461 00:05:14.552 08:43:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:14.552 08:43:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.552 08:43:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2357461 00:05:14.552 08:43:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.812 08:43:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.812 08:43:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.812 08:43:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2357461 00:05:14.812 08:43:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:14.812 08:43:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:14.812 08:43:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:14.812 08:43:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:14.812 SPDK target shutdown done 00:05:14.812 08:43:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:14.812 Success 00:05:14.812 00:05:14.812 real 0m1.449s 00:05:14.812 user 0m0.950s 00:05:14.812 sys 0m0.526s 00:05:14.812 08:43:37 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:14.812 08:43:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:14.812 
************************************ 00:05:14.812 END TEST json_config_extra_key 00:05:14.812 ************************************ 00:05:14.812 08:43:37 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:14.812 08:43:37 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:14.812 08:43:37 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:14.812 08:43:37 -- common/autotest_common.sh@10 -- # set +x 00:05:15.073 ************************************ 00:05:15.073 START TEST alias_rpc 00:05:15.073 ************************************ 00:05:15.073 08:43:37 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:15.073 * Looking for test storage... 00:05:15.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:15.073 08:43:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:15.073 08:43:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2357783 00:05:15.073 08:43:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2357783 00:05:15.073 08:43:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.073 08:43:37 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 2357783 ']' 00:05:15.073 08:43:37 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.073 08:43:37 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:15.073 08:43:37 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:15.073 08:43:37 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:15.073 08:43:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.073 [2024-06-09 08:43:37.543054] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:15.073 [2024-06-09 08:43:37.543114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2357783 ] 00:05:15.073 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.073 [2024-06-09 08:43:37.606339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.333 [2024-06-09 08:43:37.682743] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.903 08:43:38 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:15.903 08:43:38 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:15.903 08:43:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:16.164 08:43:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2357783 00:05:16.164 08:43:38 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 2357783 ']' 00:05:16.164 08:43:38 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 2357783 00:05:16.164 08:43:38 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:05:16.164 08:43:38 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:16.164 08:43:38 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2357783 00:05:16.164 08:43:38 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:16.164 08:43:38 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:16.164 08:43:38 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2357783' 00:05:16.164 killing process with 
pid 2357783 00:05:16.164 08:43:38 alias_rpc -- common/autotest_common.sh@968 -- # kill 2357783 00:05:16.164 08:43:38 alias_rpc -- common/autotest_common.sh@973 -- # wait 2357783 00:05:16.424 00:05:16.424 real 0m1.356s 00:05:16.424 user 0m1.494s 00:05:16.424 sys 0m0.351s 00:05:16.424 08:43:38 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:16.424 08:43:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.424 ************************************ 00:05:16.424 END TEST alias_rpc 00:05:16.424 ************************************ 00:05:16.424 08:43:38 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:16.424 08:43:38 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:16.424 08:43:38 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:16.424 08:43:38 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:16.424 08:43:38 -- common/autotest_common.sh@10 -- # set +x 00:05:16.424 ************************************ 00:05:16.424 START TEST spdkcli_tcp 00:05:16.424 ************************************ 00:05:16.424 08:43:38 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:16.424 * Looking for test storage... 
00:05:16.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:16.424 08:43:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:16.424 08:43:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:16.424 08:43:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:16.424 08:43:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:16.424 08:43:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:16.424 08:43:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:16.424 08:43:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:16.424 08:43:38 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:16.424 08:43:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.424 08:43:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2358166 00:05:16.424 08:43:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2358166 00:05:16.425 08:43:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:16.425 08:43:38 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 2358166 ']' 00:05:16.425 08:43:38 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.425 08:43:38 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:16.425 08:43:38 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:16.425 08:43:38 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:16.425 08:43:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.686 [2024-06-09 08:43:39.000199] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:16.686 [2024-06-09 08:43:39.000281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358166 ] 00:05:16.686 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.686 [2024-06-09 08:43:39.067319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.686 [2024-06-09 08:43:39.141920] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.686 [2024-06-09 08:43:39.141923] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.265 08:43:39 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:17.265 08:43:39 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:05:17.265 08:43:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2358436 00:05:17.265 08:43:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:17.265 08:43:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:17.526 [ 00:05:17.526 "bdev_malloc_delete", 00:05:17.526 "bdev_malloc_create", 00:05:17.526 "bdev_null_resize", 00:05:17.526 "bdev_null_delete", 00:05:17.526 "bdev_null_create", 00:05:17.526 "bdev_nvme_cuse_unregister", 00:05:17.526 "bdev_nvme_cuse_register", 00:05:17.526 "bdev_opal_new_user", 00:05:17.526 "bdev_opal_set_lock_state", 00:05:17.526 "bdev_opal_delete", 00:05:17.526 "bdev_opal_get_info", 00:05:17.526 "bdev_opal_create", 00:05:17.526 "bdev_nvme_opal_revert", 00:05:17.526 "bdev_nvme_opal_init", 00:05:17.526 
"bdev_nvme_send_cmd", 00:05:17.526 "bdev_nvme_get_path_iostat", 00:05:17.526 "bdev_nvme_get_mdns_discovery_info", 00:05:17.526 "bdev_nvme_stop_mdns_discovery", 00:05:17.526 "bdev_nvme_start_mdns_discovery", 00:05:17.526 "bdev_nvme_set_multipath_policy", 00:05:17.526 "bdev_nvme_set_preferred_path", 00:05:17.526 "bdev_nvme_get_io_paths", 00:05:17.526 "bdev_nvme_remove_error_injection", 00:05:17.526 "bdev_nvme_add_error_injection", 00:05:17.526 "bdev_nvme_get_discovery_info", 00:05:17.526 "bdev_nvme_stop_discovery", 00:05:17.526 "bdev_nvme_start_discovery", 00:05:17.526 "bdev_nvme_get_controller_health_info", 00:05:17.526 "bdev_nvme_disable_controller", 00:05:17.526 "bdev_nvme_enable_controller", 00:05:17.526 "bdev_nvme_reset_controller", 00:05:17.526 "bdev_nvme_get_transport_statistics", 00:05:17.526 "bdev_nvme_apply_firmware", 00:05:17.526 "bdev_nvme_detach_controller", 00:05:17.526 "bdev_nvme_get_controllers", 00:05:17.526 "bdev_nvme_attach_controller", 00:05:17.526 "bdev_nvme_set_hotplug", 00:05:17.526 "bdev_nvme_set_options", 00:05:17.526 "bdev_passthru_delete", 00:05:17.526 "bdev_passthru_create", 00:05:17.526 "bdev_lvol_set_parent_bdev", 00:05:17.526 "bdev_lvol_set_parent", 00:05:17.526 "bdev_lvol_check_shallow_copy", 00:05:17.526 "bdev_lvol_start_shallow_copy", 00:05:17.526 "bdev_lvol_grow_lvstore", 00:05:17.526 "bdev_lvol_get_lvols", 00:05:17.526 "bdev_lvol_get_lvstores", 00:05:17.526 "bdev_lvol_delete", 00:05:17.526 "bdev_lvol_set_read_only", 00:05:17.526 "bdev_lvol_resize", 00:05:17.526 "bdev_lvol_decouple_parent", 00:05:17.526 "bdev_lvol_inflate", 00:05:17.526 "bdev_lvol_rename", 00:05:17.526 "bdev_lvol_clone_bdev", 00:05:17.526 "bdev_lvol_clone", 00:05:17.526 "bdev_lvol_snapshot", 00:05:17.526 "bdev_lvol_create", 00:05:17.526 "bdev_lvol_delete_lvstore", 00:05:17.526 "bdev_lvol_rename_lvstore", 00:05:17.526 "bdev_lvol_create_lvstore", 00:05:17.526 "bdev_raid_set_options", 00:05:17.526 "bdev_raid_remove_base_bdev", 00:05:17.526 "bdev_raid_add_base_bdev", 
00:05:17.526 "bdev_raid_delete", 00:05:17.526 "bdev_raid_create", 00:05:17.526 "bdev_raid_get_bdevs", 00:05:17.526 "bdev_error_inject_error", 00:05:17.526 "bdev_error_delete", 00:05:17.526 "bdev_error_create", 00:05:17.526 "bdev_split_delete", 00:05:17.526 "bdev_split_create", 00:05:17.526 "bdev_delay_delete", 00:05:17.526 "bdev_delay_create", 00:05:17.526 "bdev_delay_update_latency", 00:05:17.526 "bdev_zone_block_delete", 00:05:17.526 "bdev_zone_block_create", 00:05:17.526 "blobfs_create", 00:05:17.526 "blobfs_detect", 00:05:17.526 "blobfs_set_cache_size", 00:05:17.526 "bdev_aio_delete", 00:05:17.526 "bdev_aio_rescan", 00:05:17.526 "bdev_aio_create", 00:05:17.526 "bdev_ftl_set_property", 00:05:17.526 "bdev_ftl_get_properties", 00:05:17.526 "bdev_ftl_get_stats", 00:05:17.526 "bdev_ftl_unmap", 00:05:17.526 "bdev_ftl_unload", 00:05:17.526 "bdev_ftl_delete", 00:05:17.526 "bdev_ftl_load", 00:05:17.526 "bdev_ftl_create", 00:05:17.526 "bdev_virtio_attach_controller", 00:05:17.526 "bdev_virtio_scsi_get_devices", 00:05:17.526 "bdev_virtio_detach_controller", 00:05:17.526 "bdev_virtio_blk_set_hotplug", 00:05:17.526 "bdev_iscsi_delete", 00:05:17.526 "bdev_iscsi_create", 00:05:17.526 "bdev_iscsi_set_options", 00:05:17.526 "accel_error_inject_error", 00:05:17.526 "ioat_scan_accel_module", 00:05:17.526 "dsa_scan_accel_module", 00:05:17.526 "iaa_scan_accel_module", 00:05:17.526 "keyring_file_remove_key", 00:05:17.526 "keyring_file_add_key", 00:05:17.526 "keyring_linux_set_options", 00:05:17.526 "iscsi_get_histogram", 00:05:17.526 "iscsi_enable_histogram", 00:05:17.526 "iscsi_set_options", 00:05:17.526 "iscsi_get_auth_groups", 00:05:17.526 "iscsi_auth_group_remove_secret", 00:05:17.526 "iscsi_auth_group_add_secret", 00:05:17.526 "iscsi_delete_auth_group", 00:05:17.526 "iscsi_create_auth_group", 00:05:17.526 "iscsi_set_discovery_auth", 00:05:17.526 "iscsi_get_options", 00:05:17.526 "iscsi_target_node_request_logout", 00:05:17.526 "iscsi_target_node_set_redirect", 00:05:17.527 
"iscsi_target_node_set_auth", 00:05:17.527 "iscsi_target_node_add_lun", 00:05:17.527 "iscsi_get_stats", 00:05:17.527 "iscsi_get_connections", 00:05:17.527 "iscsi_portal_group_set_auth", 00:05:17.527 "iscsi_start_portal_group", 00:05:17.527 "iscsi_delete_portal_group", 00:05:17.527 "iscsi_create_portal_group", 00:05:17.527 "iscsi_get_portal_groups", 00:05:17.527 "iscsi_delete_target_node", 00:05:17.527 "iscsi_target_node_remove_pg_ig_maps", 00:05:17.527 "iscsi_target_node_add_pg_ig_maps", 00:05:17.527 "iscsi_create_target_node", 00:05:17.527 "iscsi_get_target_nodes", 00:05:17.527 "iscsi_delete_initiator_group", 00:05:17.527 "iscsi_initiator_group_remove_initiators", 00:05:17.527 "iscsi_initiator_group_add_initiators", 00:05:17.527 "iscsi_create_initiator_group", 00:05:17.527 "iscsi_get_initiator_groups", 00:05:17.527 "nvmf_set_crdt", 00:05:17.527 "nvmf_set_config", 00:05:17.527 "nvmf_set_max_subsystems", 00:05:17.527 "nvmf_stop_mdns_prr", 00:05:17.527 "nvmf_publish_mdns_prr", 00:05:17.527 "nvmf_subsystem_get_listeners", 00:05:17.527 "nvmf_subsystem_get_qpairs", 00:05:17.527 "nvmf_subsystem_get_controllers", 00:05:17.527 "nvmf_get_stats", 00:05:17.527 "nvmf_get_transports", 00:05:17.527 "nvmf_create_transport", 00:05:17.527 "nvmf_get_targets", 00:05:17.527 "nvmf_delete_target", 00:05:17.527 "nvmf_create_target", 00:05:17.527 "nvmf_subsystem_allow_any_host", 00:05:17.527 "nvmf_subsystem_remove_host", 00:05:17.527 "nvmf_subsystem_add_host", 00:05:17.527 "nvmf_ns_remove_host", 00:05:17.527 "nvmf_ns_add_host", 00:05:17.527 "nvmf_subsystem_remove_ns", 00:05:17.527 "nvmf_subsystem_add_ns", 00:05:17.527 "nvmf_subsystem_listener_set_ana_state", 00:05:17.527 "nvmf_discovery_get_referrals", 00:05:17.527 "nvmf_discovery_remove_referral", 00:05:17.527 "nvmf_discovery_add_referral", 00:05:17.527 "nvmf_subsystem_remove_listener", 00:05:17.527 "nvmf_subsystem_add_listener", 00:05:17.527 "nvmf_delete_subsystem", 00:05:17.527 "nvmf_create_subsystem", 00:05:17.527 
"nvmf_get_subsystems", 00:05:17.527 "env_dpdk_get_mem_stats", 00:05:17.527 "nbd_get_disks", 00:05:17.527 "nbd_stop_disk", 00:05:17.527 "nbd_start_disk", 00:05:17.527 "ublk_recover_disk", 00:05:17.527 "ublk_get_disks", 00:05:17.527 "ublk_stop_disk", 00:05:17.527 "ublk_start_disk", 00:05:17.527 "ublk_destroy_target", 00:05:17.527 "ublk_create_target", 00:05:17.527 "virtio_blk_create_transport", 00:05:17.527 "virtio_blk_get_transports", 00:05:17.527 "vhost_controller_set_coalescing", 00:05:17.527 "vhost_get_controllers", 00:05:17.527 "vhost_delete_controller", 00:05:17.527 "vhost_create_blk_controller", 00:05:17.527 "vhost_scsi_controller_remove_target", 00:05:17.527 "vhost_scsi_controller_add_target", 00:05:17.527 "vhost_start_scsi_controller", 00:05:17.527 "vhost_create_scsi_controller", 00:05:17.527 "thread_set_cpumask", 00:05:17.527 "framework_get_scheduler", 00:05:17.527 "framework_set_scheduler", 00:05:17.527 "framework_get_reactors", 00:05:17.527 "thread_get_io_channels", 00:05:17.527 "thread_get_pollers", 00:05:17.527 "thread_get_stats", 00:05:17.527 "framework_monitor_context_switch", 00:05:17.527 "spdk_kill_instance", 00:05:17.527 "log_enable_timestamps", 00:05:17.527 "log_get_flags", 00:05:17.527 "log_clear_flag", 00:05:17.527 "log_set_flag", 00:05:17.527 "log_get_level", 00:05:17.527 "log_set_level", 00:05:17.527 "log_get_print_level", 00:05:17.527 "log_set_print_level", 00:05:17.527 "framework_enable_cpumask_locks", 00:05:17.527 "framework_disable_cpumask_locks", 00:05:17.527 "framework_wait_init", 00:05:17.527 "framework_start_init", 00:05:17.527 "scsi_get_devices", 00:05:17.527 "bdev_get_histogram", 00:05:17.527 "bdev_enable_histogram", 00:05:17.527 "bdev_set_qos_limit", 00:05:17.527 "bdev_set_qd_sampling_period", 00:05:17.527 "bdev_get_bdevs", 00:05:17.527 "bdev_reset_iostat", 00:05:17.527 "bdev_get_iostat", 00:05:17.527 "bdev_examine", 00:05:17.527 "bdev_wait_for_examine", 00:05:17.527 "bdev_set_options", 00:05:17.527 "notify_get_notifications", 
00:05:17.527 "notify_get_types", 00:05:17.527 "accel_get_stats", 00:05:17.527 "accel_set_options", 00:05:17.527 "accel_set_driver", 00:05:17.527 "accel_crypto_key_destroy", 00:05:17.527 "accel_crypto_keys_get", 00:05:17.527 "accel_crypto_key_create", 00:05:17.527 "accel_assign_opc", 00:05:17.527 "accel_get_module_info", 00:05:17.527 "accel_get_opc_assignments", 00:05:17.527 "vmd_rescan", 00:05:17.527 "vmd_remove_device", 00:05:17.527 "vmd_enable", 00:05:17.527 "sock_get_default_impl", 00:05:17.527 "sock_set_default_impl", 00:05:17.527 "sock_impl_set_options", 00:05:17.527 "sock_impl_get_options", 00:05:17.527 "iobuf_get_stats", 00:05:17.527 "iobuf_set_options", 00:05:17.527 "framework_get_pci_devices", 00:05:17.527 "framework_get_config", 00:05:17.527 "framework_get_subsystems", 00:05:17.527 "trace_get_info", 00:05:17.527 "trace_get_tpoint_group_mask", 00:05:17.527 "trace_disable_tpoint_group", 00:05:17.527 "trace_enable_tpoint_group", 00:05:17.527 "trace_clear_tpoint_mask", 00:05:17.527 "trace_set_tpoint_mask", 00:05:17.527 "keyring_get_keys", 00:05:17.527 "spdk_get_version", 00:05:17.527 "rpc_get_methods" 00:05:17.527 ] 00:05:17.527 08:43:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:17.527 08:43:39 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:17.527 08:43:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.527 08:43:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:17.527 08:43:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2358166 00:05:17.527 08:43:39 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 2358166 ']' 00:05:17.527 08:43:39 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 2358166 00:05:17.527 08:43:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:05:17.527 08:43:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:17.527 08:43:39 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers 
-o comm= 2358166 00:05:17.527 08:43:40 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:17.527 08:43:40 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:17.527 08:43:40 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2358166' 00:05:17.527 killing process with pid 2358166 00:05:17.527 08:43:40 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 2358166 00:05:17.527 08:43:40 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 2358166 00:05:17.788 00:05:17.788 real 0m1.394s 00:05:17.788 user 0m2.538s 00:05:17.788 sys 0m0.413s 00:05:17.788 08:43:40 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:17.788 08:43:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.788 ************************************ 00:05:17.788 END TEST spdkcli_tcp 00:05:17.788 ************************************ 00:05:17.788 08:43:40 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:17.788 08:43:40 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:17.788 08:43:40 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:17.788 08:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:17.788 ************************************ 00:05:17.788 START TEST dpdk_mem_utility 00:05:17.788 ************************************ 00:05:17.788 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:18.048 * Looking for test storage... 
00:05:18.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:18.048 08:43:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:18.048 08:43:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2358573 00:05:18.048 08:43:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2358573 00:05:18.048 08:43:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.048 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 2358573 ']' 00:05:18.048 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.048 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:18.048 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.048 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:18.048 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:18.048 [2024-06-09 08:43:40.448744] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:05:18.048 [2024-06-09 08:43:40.448803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358573 ] 00:05:18.048 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.048 [2024-06-09 08:43:40.512493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.048 [2024-06-09 08:43:40.587422] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.992 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:18.992 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:05:18.992 08:43:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:18.993 08:43:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:18.993 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:18.993 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:18.993 { 00:05:18.993 "filename": "/tmp/spdk_mem_dump.txt" 00:05:18.993 } 00:05:18.993 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:18.993 08:43:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:18.993 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:18.993 1 heaps totaling size 814.000000 MiB 00:05:18.993 size: 814.000000 MiB heap id: 0 00:05:18.993 end heaps---------- 00:05:18.993 8 mempools totaling size 598.116089 MiB 00:05:18.993 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:18.993 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:18.993 size: 84.521057 MiB name: bdev_io_2358573 00:05:18.993 size: 51.011292 MiB name: evtpool_2358573 00:05:18.993 size: 50.003479 
MiB name: msgpool_2358573 00:05:18.993 size: 21.763794 MiB name: PDU_Pool 00:05:18.993 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:18.993 size: 0.026123 MiB name: Session_Pool 00:05:18.993 end mempools------- 00:05:18.993 6 memzones totaling size 4.142822 MiB 00:05:18.993 size: 1.000366 MiB name: RG_ring_0_2358573 00:05:18.993 size: 1.000366 MiB name: RG_ring_1_2358573 00:05:18.993 size: 1.000366 MiB name: RG_ring_4_2358573 00:05:18.993 size: 1.000366 MiB name: RG_ring_5_2358573 00:05:18.993 size: 0.125366 MiB name: RG_ring_2_2358573 00:05:18.993 size: 0.015991 MiB name: RG_ring_3_2358573 00:05:18.993 end memzones------- 00:05:18.993 08:43:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:18.993 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:18.993 list of free elements. size: 12.519348 MiB 00:05:18.993 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:18.993 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:18.993 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:18.993 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:18.993 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:18.993 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:18.993 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:18.993 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:18.993 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:18.993 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:18.993 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:18.993 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:18.993 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:18.993 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:18.993 element at 
address: 0x200003a00000 with size: 0.355530 MiB 00:05:18.993 list of standard malloc elements. size: 199.218079 MiB 00:05:18.993 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:18.993 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:18.993 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:18.993 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:18.993 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:18.993 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:18.993 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:18.993 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:18.993 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:18.993 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:18.993 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:18.993 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:18.993 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:18.993 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:18.993 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:18.993 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:18.993 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:18.993 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:18.993 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:18.993 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:18.993 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:18.993 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:18.993 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:18.993 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:18.993 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:18.993 element at address: 0x200003eff0c0 with size: 0.000183 MiB 
00:05:18.993 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:18.993 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:18.993 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:18.993 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:18.993 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:18.993 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:18.993 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:18.993 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:18.993 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:18.993 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:18.993 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:18.993 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:18.993 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:18.993 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:18.993 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:18.993 list of memzone associated elements. 
size: 602.262573 MiB 00:05:18.993 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:18.993 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:18.993 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:18.993 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:18.993 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:18.993 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2358573_0 00:05:18.993 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:18.993 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2358573_0 00:05:18.993 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:18.993 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2358573_0 00:05:18.993 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:18.993 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:18.993 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:18.993 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:18.993 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:18.993 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2358573 00:05:18.993 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:18.993 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2358573 00:05:18.993 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:18.993 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2358573 00:05:18.993 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:18.993 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:18.993 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:18.993 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:18.993 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:18.993 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:18.993 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:18.993 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:18.993 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:18.993 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2358573 00:05:18.993 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:18.993 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2358573 00:05:18.993 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:18.993 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2358573 00:05:18.993 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:18.993 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2358573 00:05:18.993 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:18.993 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2358573 00:05:18.993 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:18.993 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:18.993 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:18.993 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:18.993 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:18.993 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:18.993 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:18.993 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2358573 00:05:18.993 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:18.993 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:18.993 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:18.993 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:18.993 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:05:18.993 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2358573 00:05:18.993 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:18.993 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:18.993 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:18.993 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2358573 00:05:18.993 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:18.993 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2358573 00:05:18.993 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:18.993 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:18.993 08:43:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:18.994 08:43:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2358573 00:05:18.994 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 2358573 ']' 00:05:18.994 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 2358573 00:05:18.994 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:05:18.994 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:18.994 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2358573 00:05:18.994 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:18.994 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:18.994 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2358573' 00:05:18.994 killing process with pid 2358573 00:05:18.994 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 2358573 00:05:18.994 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 2358573 00:05:19.255 00:05:19.255 real 0m1.276s 
00:05:19.255 user 0m1.353s 00:05:19.255 sys 0m0.357s 00:05:19.255 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:19.255 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:19.256 ************************************ 00:05:19.256 END TEST dpdk_mem_utility 00:05:19.256 ************************************ 00:05:19.256 08:43:41 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:19.256 08:43:41 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:19.256 08:43:41 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:19.256 08:43:41 -- common/autotest_common.sh@10 -- # set +x 00:05:19.256 ************************************ 00:05:19.256 START TEST event 00:05:19.256 ************************************ 00:05:19.256 08:43:41 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:19.256 * Looking for test storage... 
00:05:19.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:19.256 08:43:41 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:19.256 08:43:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:19.256 08:43:41 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:19.256 08:43:41 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:05:19.256 08:43:41 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:19.256 08:43:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.256 ************************************ 00:05:19.256 START TEST event_perf 00:05:19.256 ************************************ 00:05:19.256 08:43:41 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:19.256 Running I/O for 1 seconds...[2024-06-09 08:43:41.800771] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:05:19.256 [2024-06-09 08:43:41.800848] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358960 ] 00:05:19.517 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.517 [2024-06-09 08:43:41.865028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:19.517 [2024-06-09 08:43:41.931549] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.517 [2024-06-09 08:43:41.931745] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.517 [2024-06-09 08:43:41.931904] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.517 [2024-06-09 08:43:41.931904] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.493 Running I/O for 1 seconds... 00:05:20.493 lcore 0: 178189 00:05:20.493 lcore 1: 178190 00:05:20.493 lcore 2: 178188 00:05:20.493 lcore 3: 178191 00:05:20.493 done. 
00:05:20.493 00:05:20.493 real 0m1.206s 00:05:20.493 user 0m4.131s 00:05:20.493 sys 0m0.073s 00:05:20.493 08:43:42 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:20.493 08:43:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.493 ************************************ 00:05:20.493 END TEST event_perf 00:05:20.493 ************************************ 00:05:20.493 08:43:43 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:20.493 08:43:43 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:05:20.493 08:43:43 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:20.493 08:43:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.762 ************************************ 00:05:20.762 START TEST event_reactor 00:05:20.762 ************************************ 00:05:20.762 08:43:43 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:20.762 [2024-06-09 08:43:43.081493] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:05:20.762 [2024-06-09 08:43:43.081579] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359230 ] 00:05:20.762 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.762 [2024-06-09 08:43:43.145450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.762 [2024-06-09 08:43:43.213175] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.144 test_start 00:05:22.144 oneshot 00:05:22.144 tick 100 00:05:22.144 tick 100 00:05:22.144 tick 250 00:05:22.144 tick 100 00:05:22.144 tick 100 00:05:22.144 tick 100 00:05:22.144 tick 250 00:05:22.144 tick 500 00:05:22.144 tick 100 00:05:22.144 tick 100 00:05:22.144 tick 250 00:05:22.144 tick 100 00:05:22.144 tick 100 00:05:22.144 test_end 00:05:22.144 00:05:22.144 real 0m1.204s 00:05:22.144 user 0m1.128s 00:05:22.144 sys 0m0.072s 00:05:22.144 08:43:44 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:22.144 08:43:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:22.144 ************************************ 00:05:22.144 END TEST event_reactor 00:05:22.144 ************************************ 00:05:22.144 08:43:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.144 08:43:44 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:05:22.144 08:43:44 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:22.144 08:43:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.144 ************************************ 00:05:22.144 START TEST event_reactor_perf 00:05:22.144 ************************************ 00:05:22.144 08:43:44 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.144 [2024-06-09 08:43:44.357458] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:22.144 [2024-06-09 08:43:44.357556] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359379 ] 00:05:22.144 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.144 [2024-06-09 08:43:44.421736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.144 [2024-06-09 08:43:44.489713] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.086 test_start 00:05:23.086 test_end 00:05:23.086 Performance: 370076 events per second 00:05:23.086 00:05:23.086 real 0m1.206s 00:05:23.086 user 0m1.126s 00:05:23.086 sys 0m0.075s 00:05:23.086 08:43:45 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:23.086 08:43:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:23.086 ************************************ 00:05:23.086 END TEST event_reactor_perf 00:05:23.086 ************************************ 00:05:23.086 08:43:45 event -- event/event.sh@49 -- # uname -s 00:05:23.086 08:43:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:23.086 08:43:45 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:23.086 08:43:45 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:23.086 08:43:45 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:23.086 08:43:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.086 ************************************ 00:05:23.086 START TEST event_scheduler 00:05:23.086 ************************************ 00:05:23.086 08:43:45 
event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:23.348 * Looking for test storage... 00:05:23.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:23.348 08:43:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:23.348 08:43:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2359733 00:05:23.348 08:43:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.348 08:43:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:23.348 08:43:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2359733 00:05:23.348 08:43:45 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 2359733 ']' 00:05:23.348 08:43:45 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.348 08:43:45 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:23.348 08:43:45 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.348 08:43:45 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:23.348 08:43:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.348 [2024-06-09 08:43:45.771255] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:05:23.348 [2024-06-09 08:43:45.771323] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359733 ] 00:05:23.348 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.348 [2024-06-09 08:43:45.828818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:23.348 [2024-06-09 08:43:45.893318] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.348 [2024-06-09 08:43:45.893479] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.348 [2024-06-09 08:43:45.893557] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.348 [2024-06-09 08:43:45.893559] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.291 08:43:46 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:24.291 08:43:46 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:05:24.291 08:43:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:24.291 08:43:46 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:24.291 08:43:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.291 POWER: Env isn't set yet! 00:05:24.291 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:24.291 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:24.291 POWER: Cannot set governor of lcore 0 to userspace 00:05:24.291 POWER: Attempting to initialise PSTAT power management... 
00:05:24.291 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:24.291 POWER: Initialized successfully for lcore 0 power management 00:05:24.291 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:24.291 POWER: Initialized successfully for lcore 1 power management 00:05:24.291 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:24.291 POWER: Initialized successfully for lcore 2 power management 00:05:24.291 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:24.291 POWER: Initialized successfully for lcore 3 power management 00:05:24.291 [2024-06-09 08:43:46.597634] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:24.291 [2024-06-09 08:43:46.597643] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:24.291 [2024-06-09 08:43:46.597647] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:24.291 08:43:46 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:24.291 08:43:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:24.291 08:43:46 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:24.291 08:43:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.291 [2024-06-09 08:43:46.658468] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:24.291 08:43:46 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:24.291 08:43:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:24.291 08:43:46 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:24.291 08:43:46 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:24.291 08:43:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.291 ************************************ 00:05:24.291 START TEST scheduler_create_thread 00:05:24.291 ************************************ 00:05:24.291 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:05:24.291 08:43:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.292 2 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.292 3 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.292 4 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.292 5 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.292 6 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:24.292 7 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.292 8 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.292 9 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:24.292 08:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.678 10 00:05:25.679 08:43:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.679 08:43:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:25.679 08:43:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.679 08:43:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.621 08:43:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:26.621 08:43:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:26.621 08:43:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:26.621 08:43:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:26.621 08:43:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.193 08:43:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:27.193 08:43:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:27.193 08:43:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:27.193 08:43:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.764 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:27.764 08:43:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:27.764 08:43:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:27.764 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:27.764 08:43:50 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.706 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:28.706 00:05:28.706 real 0m4.213s 00:05:28.706 user 0m0.026s 00:05:28.706 sys 0m0.004s 00:05:28.706 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:28.706 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.706 ************************************ 00:05:28.706 END TEST scheduler_create_thread 00:05:28.706 ************************************ 00:05:28.706 08:43:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:28.706 08:43:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2359733 00:05:28.706 08:43:50 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 2359733 ']' 00:05:28.706 08:43:50 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 2359733 00:05:28.706 08:43:50 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 00:05:28.706 08:43:50 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:28.706 08:43:50 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2359733 00:05:28.706 08:43:50 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:05:28.706 08:43:50 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:05:28.706 08:43:50 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2359733' 00:05:28.706 killing process with pid 2359733 00:05:28.706 08:43:50 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 2359733 00:05:28.706 08:43:50 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 2359733 00:05:28.706 [2024-06-09 
08:43:51.186534] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:28.968 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:28.968 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:28.968 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:28.968 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:28.968 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:28.968 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:28.968 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:28.968 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:28.968 00:05:28.968 real 0m5.754s 00:05:28.968 user 0m13.331s 00:05:28.968 sys 0m0.379s 00:05:28.968 08:43:51 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:28.968 08:43:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.968 ************************************ 00:05:28.968 END TEST event_scheduler 00:05:28.968 ************************************ 00:05:28.968 08:43:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:28.968 08:43:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:28.968 08:43:51 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:28.968 08:43:51 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:28.968 08:43:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.968 ************************************ 00:05:28.968 START TEST app_repeat 00:05:28.968 ************************************ 00:05:28.968 08:43:51 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 
00:05:28.968 08:43:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.968 08:43:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.968 08:43:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:28.968 08:43:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.968 08:43:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:28.968 08:43:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:28.968 08:43:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:28.968 08:43:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2361019 00:05:28.968 08:43:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.968 08:43:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2361019' 00:05:28.968 Process app_repeat pid: 2361019 00:05:28.968 08:43:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.968 08:43:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:28.968 spdk_app_start Round 0 00:05:28.968 08:43:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2361019 /var/tmp/spdk-nbd.sock 00:05:28.968 08:43:51 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2361019 ']' 00:05:28.968 08:43:51 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.968 08:43:51 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:28.968 08:43:51 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:28.968 08:43:51 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:28.968 08:43:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.968 08:43:51 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:28.968 [2024-06-09 08:43:51.484894] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:28.968 [2024-06-09 08:43:51.484957] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361019 ] 00:05:28.968 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.229 [2024-06-09 08:43:51.547364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.229 [2024-06-09 08:43:51.618811] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.229 [2024-06-09 08:43:51.618813] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.802 08:43:52 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:29.802 08:43:52 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:29.802 08:43:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.063 Malloc0 00:05:30.063 08:43:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.063 Malloc1 00:05:30.063 08:43:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.063 08:43:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.063 08:43:52 
event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.063 08:43:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.063 08:43:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.063 08:43:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.063 08:43:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.063 08:43:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.063 08:43:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.063 08:43:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.063 08:43:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.063 08:43:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.063 08:43:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.063 08:43:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.063 08:43:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.063 08:43:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.325 /dev/nbd0 00:05:30.325 08:43:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.325 08:43:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.325 08:43:52 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:30.325 08:43:52 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:30.325 08:43:52 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:30.325 08:43:52 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:30.325 08:43:52 
event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:30.325 08:43:52 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:30.325 08:43:52 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:30.325 08:43:52 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:30.325 08:43:52 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.325 1+0 records in 00:05:30.325 1+0 records out 00:05:30.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242554 s, 16.9 MB/s 00:05:30.325 08:43:52 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.325 08:43:52 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:30.325 08:43:52 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.325 08:43:52 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:30.325 08:43:52 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:30.325 08:43:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.325 08:43:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.325 08:43:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.586 /dev/nbd1 00:05:30.586 08:43:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.586 08:43:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.586 08:43:52 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:30.586 08:43:52 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:30.586 
08:43:52 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:30.586 08:43:52 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:30.587 08:43:52 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:30.587 08:43:52 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:30.587 08:43:52 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:30.587 08:43:52 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:30.587 08:43:52 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.587 1+0 records in 00:05:30.587 1+0 records out 00:05:30.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272442 s, 15.0 MB/s 00:05:30.587 08:43:52 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.587 08:43:52 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:30.587 08:43:52 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.587 08:43:53 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:30.587 08:43:53 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:30.587 08:43:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.587 08:43:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.587 08:43:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.587 08:43:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.587 08:43:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.847 { 00:05:30.847 "nbd_device": "/dev/nbd0", 00:05:30.847 "bdev_name": "Malloc0" 00:05:30.847 }, 00:05:30.847 { 00:05:30.847 "nbd_device": "/dev/nbd1", 00:05:30.847 "bdev_name": "Malloc1" 00:05:30.847 } 00:05:30.847 ]' 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.847 { 00:05:30.847 "nbd_device": "/dev/nbd0", 00:05:30.847 "bdev_name": "Malloc0" 00:05:30.847 }, 00:05:30.847 { 00:05:30.847 "nbd_device": "/dev/nbd1", 00:05:30.847 "bdev_name": "Malloc1" 00:05:30.847 } 00:05:30.847 ]' 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.847 /dev/nbd1' 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.847 /dev/nbd1' 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 
00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.847 256+0 records in 00:05:30.847 256+0 records out 00:05:30.847 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012438 s, 84.3 MB/s 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.847 256+0 records in 00:05:30.847 256+0 records out 00:05:30.847 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180664 s, 58.0 MB/s 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.847 256+0 records in 00:05:30.847 256+0 records out 00:05:30.847 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016566 s, 63.3 MB/s 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.847 08:43:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.108 08:43:53 event.app_repeat 
-- bdev/nbd_common.sh@45 -- # return 0 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.108 08:43:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.370 08:43:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.370 08:43:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.370 08:43:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.370 08:43:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.370 08:43:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.370 08:43:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.370 08:43:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.370 
08:43:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.370 08:43:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.370 08:43:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.370 08:43:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.370 08:43:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.370 08:43:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.630 08:43:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.630 [2024-06-09 08:43:54.147589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.891 [2024-06-09 08:43:54.210732] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.891 [2024-06-09 08:43:54.210734] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.891 [2024-06-09 08:43:54.241957] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.891 [2024-06-09 08:43:54.241989] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:35.193 08:43:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.193 08:43:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:35.193 spdk_app_start Round 1 00:05:35.193 08:43:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2361019 /var/tmp/spdk-nbd.sock 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2361019 ']' 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:35.193 08:43:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.193 Malloc0 00:05:35.193 08:43:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.193 Malloc1 00:05:35.193 08:43:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # 
bdev_list=('Malloc0' 'Malloc1') 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.193 /dev/nbd0 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@871 -- 
# grep -q -w nbd0 /proc/partitions 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.193 1+0 records in 00:05:35.193 1+0 records out 00:05:35.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273471 s, 15.0 MB/s 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:35.193 08:43:57 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.193 08:43:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.454 /dev/nbd1 00:05:35.454 08:43:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.454 08:43:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.454 08:43:57 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:35.454 08:43:57 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:35.454 08:43:57 event.app_repeat -- 
common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:35.454 08:43:57 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:35.454 08:43:57 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:35.454 08:43:57 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:35.455 08:43:57 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:35.455 08:43:57 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:35.455 08:43:57 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.455 1+0 records in 00:05:35.455 1+0 records out 00:05:35.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280357 s, 14.6 MB/s 00:05:35.455 08:43:57 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.455 08:43:57 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:35.455 08:43:57 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.455 08:43:57 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:35.455 08:43:57 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:35.455 08:43:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.455 08:43:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.455 08:43:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.455 08:43:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.455 08:43:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.716 08:43:58 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:35.716 { 00:05:35.716 "nbd_device": "/dev/nbd0", 00:05:35.716 "bdev_name": "Malloc0" 00:05:35.716 }, 00:05:35.716 { 00:05:35.716 "nbd_device": "/dev/nbd1", 00:05:35.716 "bdev_name": "Malloc1" 00:05:35.716 } 00:05:35.716 ]' 00:05:35.716 08:43:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:35.716 { 00:05:35.716 "nbd_device": "/dev/nbd0", 00:05:35.716 "bdev_name": "Malloc0" 00:05:35.716 }, 00:05:35.716 { 00:05:35.716 "nbd_device": "/dev/nbd1", 00:05:35.716 "bdev_name": "Malloc1" 00:05:35.716 } 00:05:35.716 ]' 00:05:35.716 08:43:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.716 08:43:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:35.716 /dev/nbd1' 00:05:35.716 08:43:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:35.716 /dev/nbd1' 00:05:35.716 08:43:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.716 08:43:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:35.716 08:43:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:35.716 08:43:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:35.716 08:43:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:35.716 08:43:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:35.716 08:43:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.716 08:43:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:35.717 08:43:58 
event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:35.717 256+0 records in 00:05:35.717 256+0 records out 00:05:35.717 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116761 s, 89.8 MB/s 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:35.717 256+0 records in 00:05:35.717 256+0 records out 00:05:35.717 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159448 s, 65.8 MB/s 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:35.717 256+0 records in 00:05:35.717 256+0 records out 00:05:35.717 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173828 s, 60.3 MB/s 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.717 
08:43:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.717 08:43:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.979 08:43:58 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.979 08:43:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.240 08:43:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.240 08:43:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.240 08:43:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.240 08:43:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.240 08:43:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.240 08:43:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.240 08:43:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:36.240 08:43:58 
event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.240 08:43:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.240 08:43:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.240 08:43:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.240 08:43:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.240 08:43:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:36.501 08:43:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:36.501 [2024-06-09 08:43:59.015493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.762 [2024-06-09 08:43:59.078033] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.762 [2024-06-09 08:43:59.078036] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.762 [2024-06-09 08:43:59.110172] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:36.762 [2024-06-09 08:43:59.110206] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:39.381 08:44:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:39.381 08:44:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:39.381 spdk_app_start Round 2 00:05:39.381 08:44:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2361019 /var/tmp/spdk-nbd.sock 00:05:39.381 08:44:01 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2361019 ']' 00:05:39.381 08:44:01 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:39.381 08:44:01 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:39.381 08:44:01 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:39.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:39.381 08:44:01 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:39.381 08:44:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.642 08:44:02 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:39.642 08:44:02 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:39.642 08:44:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.642 Malloc0 00:05:39.902 08:44:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.902 Malloc1 00:05:39.902 08:44:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.902 08:44:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.902 08:44:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # 
bdev_list=('Malloc0' 'Malloc1') 00:05:39.902 08:44:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.902 08:44:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.903 08:44:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.903 08:44:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.903 08:44:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.903 08:44:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.903 08:44:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.903 08:44:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.903 08:44:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.903 08:44:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:39.903 08:44:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.903 08:44:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.903 08:44:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.163 /dev/nbd0 00:05:40.163 08:44:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.163 08:44:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.163 08:44:02 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:40.163 08:44:02 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:40.163 08:44:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:40.163 08:44:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:40.163 08:44:02 event.app_repeat -- common/autotest_common.sh@871 -- 
# grep -q -w nbd0 /proc/partitions 00:05:40.163 08:44:02 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:40.163 08:44:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:40.163 08:44:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:40.163 08:44:02 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.163 1+0 records in 00:05:40.163 1+0 records out 00:05:40.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 8.4692e-05 s, 48.4 MB/s 00:05:40.164 08:44:02 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.164 08:44:02 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:40.164 08:44:02 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.164 08:44:02 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:40.164 08:44:02 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:40.164 08:44:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.164 08:44:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.164 08:44:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.164 /dev/nbd1 00:05:40.164 08:44:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.164 08:44:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.164 08:44:02 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:40.164 08:44:02 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:40.164 08:44:02 event.app_repeat -- 
common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:40.164 08:44:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:40.164 08:44:02 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:40.164 08:44:02 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:40.164 08:44:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:40.164 08:44:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:40.164 08:44:02 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.164 1+0 records in 00:05:40.164 1+0 records out 00:05:40.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00011789 s, 34.7 MB/s 00:05:40.425 08:44:02 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.425 08:44:02 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:40.425 08:44:02 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.425 08:44:02 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:40.425 08:44:02 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.425 08:44:02 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.425 { 00:05:40.425 "nbd_device": "/dev/nbd0", 00:05:40.425 "bdev_name": "Malloc0" 00:05:40.425 }, 00:05:40.425 { 00:05:40.425 "nbd_device": "/dev/nbd1", 00:05:40.425 "bdev_name": "Malloc1" 00:05:40.425 } 00:05:40.425 ]' 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.425 { 00:05:40.425 "nbd_device": "/dev/nbd0", 00:05:40.425 "bdev_name": "Malloc0" 00:05:40.425 }, 00:05:40.425 { 00:05:40.425 "nbd_device": "/dev/nbd1", 00:05:40.425 "bdev_name": "Malloc1" 00:05:40.425 } 00:05:40.425 ]' 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.425 /dev/nbd1' 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.425 /dev/nbd1' 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.425 08:44:02 
event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.425 256+0 records in 00:05:40.425 256+0 records out 00:05:40.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124707 s, 84.1 MB/s 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.425 256+0 records in 00:05:40.425 256+0 records out 00:05:40.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162694 s, 64.5 MB/s 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.425 08:44:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.685 256+0 records in 00:05:40.685 256+0 records out 00:05:40.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260725 s, 40.2 MB/s 00:05:40.685 08:44:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.685 08:44:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.685 08:44:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.685 08:44:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.685 08:44:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.685 08:44:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.685 08:44:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.685 08:44:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.685 
08:44:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.685 08:44:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.685 08:44:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.685 08:44:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.685 08:44:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.685 08:44:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.685 08:44:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.686 08:44:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.686 08:44:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.686 08:44:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.686 08:44:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.686 08:44:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.686 08:44:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.686 08:44:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.686 08:44:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.686 08:44:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.686 08:44:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.686 08:44:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.686 08:44:03 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:40.686 08:44:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.686 08:44:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.946 08:44:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.946 08:44:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.946 08:44:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.946 08:44:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.946 08:44:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.946 08:44:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.946 08:44:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.946 08:44:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.946 08:44:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.946 08:44:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.946 08:44:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.207 08:44:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.207 08:44:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.207 08:44:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.207 08:44:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.207 08:44:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.207 08:44:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.207 08:44:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.207 08:44:03 
event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.207 08:44:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.207 08:44:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.207 08:44:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.207 08:44:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.207 08:44:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.207 08:44:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.468 [2024-06-09 08:44:03.872843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.468 [2024-06-09 08:44:03.936050] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.468 [2024-06-09 08:44:03.936053] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.468 [2024-06-09 08:44:03.967374] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.468 [2024-06-09 08:44:03.967410] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.771 08:44:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2361019 /var/tmp/spdk-nbd.sock 00:05:44.771 08:44:06 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2361019 ']' 00:05:44.771 08:44:06 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.771 08:44:06 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:44.771 08:44:06 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:44.771 08:44:06 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:44.771 08:44:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.771 08:44:06 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:44.771 08:44:06 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:44.771 08:44:06 event.app_repeat -- event/event.sh@39 -- # killprocess 2361019 00:05:44.771 08:44:06 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 2361019 ']' 00:05:44.771 08:44:06 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 2361019 00:05:44.771 08:44:06 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:05:44.771 08:44:06 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:44.771 08:44:06 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2361019 00:05:44.771 08:44:06 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:44.771 08:44:06 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:44.771 08:44:06 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2361019' 00:05:44.771 killing process with pid 2361019 00:05:44.771 08:44:06 event.app_repeat -- common/autotest_common.sh@968 -- # kill 2361019 00:05:44.771 08:44:06 event.app_repeat -- common/autotest_common.sh@973 -- # wait 2361019 00:05:44.771 spdk_app_start is called in Round 0. 00:05:44.771 Shutdown signal received, stop current app iteration 00:05:44.771 Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 reinitialization... 00:05:44.771 spdk_app_start is called in Round 1. 00:05:44.771 Shutdown signal received, stop current app iteration 00:05:44.771 Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 reinitialization... 00:05:44.771 spdk_app_start is called in Round 2. 
00:05:44.771 Shutdown signal received, stop current app iteration 00:05:44.771 Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 reinitialization... 00:05:44.771 spdk_app_start is called in Round 3. 00:05:44.771 Shutdown signal received, stop current app iteration 00:05:44.771 08:44:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:44.771 08:44:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:44.771 00:05:44.771 real 0m15.619s 00:05:44.771 user 0m33.582s 00:05:44.771 sys 0m2.144s 00:05:44.771 08:44:07 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:44.771 08:44:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.771 ************************************ 00:05:44.771 END TEST app_repeat 00:05:44.771 ************************************ 00:05:44.771 08:44:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:44.771 08:44:07 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:44.771 08:44:07 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:44.771 08:44:07 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:44.771 08:44:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.771 ************************************ 00:05:44.771 START TEST cpu_locks 00:05:44.771 ************************************ 00:05:44.771 08:44:07 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:44.771 * Looking for test storage... 
00:05:44.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:44.771 08:44:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:44.771 08:44:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:44.771 08:44:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:44.772 08:44:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:44.772 08:44:07 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:44.772 08:44:07 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:44.772 08:44:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.772 ************************************ 00:05:44.772 START TEST default_locks 00:05:44.772 ************************************ 00:05:44.772 08:44:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:05:44.772 08:44:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2364372 00:05:44.772 08:44:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2364372 00:05:44.772 08:44:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.772 08:44:07 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 2364372 ']' 00:05:44.772 08:44:07 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.772 08:44:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:44.772 08:44:07 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:44.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.772 08:44:07 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:44.772 08:44:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.033 [2024-06-09 08:44:07.336693] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:45.033 [2024-06-09 08:44:07.336755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364372 ] 00:05:45.033 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.033 [2024-06-09 08:44:07.399452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.033 [2024-06-09 08:44:07.463375] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.606 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:45.606 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:05:45.606 08:44:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2364372 00:05:45.607 08:44:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2364372 00:05:45.607 08:44:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.178 lslocks: write error 00:05:46.178 08:44:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2364372 00:05:46.178 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 2364372 ']' 00:05:46.178 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 2364372 00:05:46.178 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:05:46.178 08:44:08 event.cpu_locks.default_locks -- 
common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:46.178 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2364372 00:05:46.178 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:46.178 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:46.178 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2364372' 00:05:46.178 killing process with pid 2364372 00:05:46.178 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 2364372 00:05:46.178 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 2364372 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2364372 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2364372 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # waitforlisten 2364372 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 2364372 ']' 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (2364372) - No such process 00:05:46.439 ERROR: process (pid: 2364372) is no longer running 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:46.439 00:05:46.439 real 0m1.511s 00:05:46.439 user 0m1.601s 00:05:46.439 sys 0m0.493s 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:46.439 08:44:08 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:05:46.439 ************************************ 00:05:46.439 END TEST default_locks 00:05:46.439 ************************************ 00:05:46.439 08:44:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:46.439 08:44:08 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:46.439 08:44:08 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:46.439 08:44:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.439 ************************************ 00:05:46.439 START TEST default_locks_via_rpc 00:05:46.439 ************************************ 00:05:46.439 08:44:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:05:46.439 08:44:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2364741 00:05:46.439 08:44:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2364741 00:05:46.439 08:44:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.439 08:44:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2364741 ']' 00:05:46.439 08:44:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.439 08:44:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:46.439 08:44:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:46.439 08:44:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:46.439 08:44:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.439 [2024-06-09 08:44:08.926827] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:46.439 [2024-06-09 08:44:08.926877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364741 ] 00:05:46.439 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.439 [2024-06-09 08:44:08.984961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.699 [2024-06-09 08:44:09.049736] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.271 08:44:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:47.271 08:44:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:47.271 08:44:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:47.271 08:44:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:47.271 08:44:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.271 08:44:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:47.271 08:44:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:47.271 08:44:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:47.271 08:44:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:47.271 08:44:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:47.271 08:44:09 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:47.271 08:44:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:47.271 08:44:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.271 08:44:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:47.271 08:44:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2364741 00:05:47.271 08:44:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2364741 00:05:47.271 08:44:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.532 08:44:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2364741 00:05:47.532 08:44:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 2364741 ']' 00:05:47.532 08:44:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 2364741 00:05:47.532 08:44:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:05:47.532 08:44:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:47.532 08:44:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2364741 00:05:47.792 08:44:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:47.792 08:44:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:47.793 08:44:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2364741' 00:05:47.793 killing process with pid 2364741 00:05:47.793 08:44:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 
2364741 00:05:47.793 08:44:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 2364741 00:05:47.793 00:05:47.793 real 0m1.466s 00:05:47.793 user 0m1.561s 00:05:47.793 sys 0m0.463s 00:05:47.793 08:44:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:47.793 08:44:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.793 ************************************ 00:05:47.793 END TEST default_locks_via_rpc 00:05:47.793 ************************************ 00:05:48.054 08:44:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:48.054 08:44:10 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:48.054 08:44:10 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:48.054 08:44:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.054 ************************************ 00:05:48.054 START TEST non_locking_app_on_locked_coremask 00:05:48.054 ************************************ 00:05:48.054 08:44:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:05:48.054 08:44:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2365101 00:05:48.054 08:44:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2365101 /var/tmp/spdk.sock 00:05:48.054 08:44:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.054 08:44:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2365101 ']' 00:05:48.054 08:44:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:48.054 08:44:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:48.054 08:44:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.054 08:44:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:48.054 08:44:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.054 [2024-06-09 08:44:10.458783] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:48.054 [2024-06-09 08:44:10.458831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2365101 ] 00:05:48.054 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.054 [2024-06-09 08:44:10.517891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.054 [2024-06-09 08:44:10.585103] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.997 08:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:48.997 08:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:48.997 08:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2365138 00:05:48.997 08:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2365138 /var/tmp/spdk2.sock 00:05:48.997 08:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:48.997 08:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2365138 ']' 00:05:48.997 08:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.997 08:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:48.997 08:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.997 08:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:48.997 08:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.997 [2024-06-09 08:44:11.273074] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:48.997 [2024-06-09 08:44:11.273126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2365138 ] 00:05:48.997 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.997 [2024-06-09 08:44:11.361140] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:48.997 [2024-06-09 08:44:11.361168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.997 [2024-06-09 08:44:11.490568] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.572 08:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:49.572 08:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:49.572 08:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2365101 00:05:49.572 08:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2365101 00:05:49.572 08:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.145 lslocks: write error 00:05:50.145 08:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2365101 00:05:50.146 08:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2365101 ']' 00:05:50.146 08:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 2365101 00:05:50.146 08:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:50.146 08:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:50.146 08:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2365101 00:05:50.146 08:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:50.146 08:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:50.146 08:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 2365101' 00:05:50.146 killing process with pid 2365101 00:05:50.146 08:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 2365101 00:05:50.146 08:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 2365101 00:05:50.717 08:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2365138 00:05:50.717 08:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2365138 ']' 00:05:50.717 08:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 2365138 00:05:50.717 08:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:50.717 08:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:50.717 08:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2365138 00:05:50.717 08:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:50.717 08:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:50.717 08:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2365138' 00:05:50.717 killing process with pid 2365138 00:05:50.717 08:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 2365138 00:05:50.717 08:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 2365138 00:05:50.717 00:05:50.717 real 0m2.872s 00:05:50.717 user 0m3.131s 00:05:50.717 sys 0m0.863s 00:05:50.717 08:44:13 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:50.978 08:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.978 ************************************ 00:05:50.978 END TEST non_locking_app_on_locked_coremask 00:05:50.978 ************************************ 00:05:50.978 08:44:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:50.978 08:44:13 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:50.978 08:44:13 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:50.978 08:44:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.978 ************************************ 00:05:50.978 START TEST locking_app_on_unlocked_coremask 00:05:50.978 ************************************ 00:05:50.978 08:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:05:50.978 08:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2365742 00:05:50.978 08:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2365742 /var/tmp/spdk.sock 00:05:50.978 08:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:50.978 08:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2365742 ']' 00:05:50.978 08:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.978 08:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:50.978 08:44:13 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.978 08:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:50.978 08:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.978 [2024-06-09 08:44:13.398654] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:50.978 [2024-06-09 08:44:13.398700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2365742 ] 00:05:50.978 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.979 [2024-06-09 08:44:13.456912] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:50.979 [2024-06-09 08:44:13.456945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.979 [2024-06-09 08:44:13.520828] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.922 08:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:51.922 08:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:51.922 08:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2365825 00:05:51.922 08:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:51.922 08:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2365825 /var/tmp/spdk2.sock 00:05:51.922 08:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2365825 ']' 00:05:51.923 08:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.923 08:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:51.923 08:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.923 08:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:51.923 08:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.923 [2024-06-09 08:44:14.216098] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:05:51.923 [2024-06-09 08:44:14.216153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2365825 ] 00:05:51.923 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.923 [2024-06-09 08:44:14.305412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.923 [2024-06-09 08:44:14.438858] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.495 08:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:52.496 08:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:52.496 08:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2365825 00:05:52.496 08:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2365825 00:05:52.496 08:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.068 lslocks: write error 00:05:53.068 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2365742 00:05:53.068 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2365742 ']' 00:05:53.068 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 2365742 00:05:53.068 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:53.068 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:53.068 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2365742 00:05:53.068 08:44:15 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:53.068 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:53.068 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2365742' 00:05:53.068 killing process with pid 2365742 00:05:53.068 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 2365742 00:05:53.068 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 2365742 00:05:53.640 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2365825 00:05:53.640 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2365825 ']' 00:05:53.640 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 2365825 00:05:53.640 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:53.640 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:53.640 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2365825 00:05:53.640 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:53.640 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:53.640 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2365825' 00:05:53.640 killing process with pid 2365825 00:05:53.640 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@968 -- # kill 2365825 00:05:53.640 08:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 2365825 00:05:53.902 00:05:53.902 real 0m2.862s 00:05:53.902 user 0m3.103s 00:05:53.902 sys 0m0.855s 00:05:53.902 08:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:53.902 08:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.902 ************************************ 00:05:53.902 END TEST locking_app_on_unlocked_coremask 00:05:53.902 ************************************ 00:05:53.902 08:44:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:53.902 08:44:16 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:53.902 08:44:16 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:53.902 08:44:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.902 ************************************ 00:05:53.902 START TEST locking_app_on_locked_coremask 00:05:53.902 ************************************ 00:05:53.902 08:44:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:05:53.902 08:44:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2366222 00:05:53.902 08:44:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2366222 /var/tmp/spdk.sock 00:05:53.902 08:44:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.902 08:44:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2366222 ']' 00:05:53.902 08:44:16 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.902 08:44:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:53.902 08:44:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.902 08:44:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:53.902 08:44:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.902 [2024-06-09 08:44:16.334750] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:53.902 [2024-06-09 08:44:16.334800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366222 ] 00:05:53.902 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.902 [2024-06-09 08:44:16.394178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.902 [2024-06-09 08:44:16.460297] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.879 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:54.879 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:54.879 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2366529 00:05:54.879 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2366529 /var/tmp/spdk2.sock 00:05:54.879 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@649 -- # local es=0 00:05:54.879 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:54.879 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2366529 /var/tmp/spdk2.sock 00:05:54.879 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:54.879 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:54.879 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:54.879 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:54.879 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2366529 /var/tmp/spdk2.sock 00:05:54.879 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2366529 ']' 00:05:54.879 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.880 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:54.880 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:54.880 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:54.880 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.880 [2024-06-09 08:44:17.144612] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:54.880 [2024-06-09 08:44:17.144663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366529 ] 00:05:54.880 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.880 [2024-06-09 08:44:17.230555] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2366222 has claimed it. 00:05:54.880 [2024-06-09 08:44:17.230593] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (2366529) - No such process 00:05:55.451 ERROR: process (pid: 2366529) is no longer running 00:05:55.451 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:55.451 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:05:55.451 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:55.451 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:55.451 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:55.451 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:55.451 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # 
locks_exist 2366222 00:05:55.451 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2366222 00:05:55.451 08:44:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.712 lslocks: write error 00:05:55.712 08:44:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2366222 00:05:55.712 08:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2366222 ']' 00:05:55.712 08:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 2366222 00:05:55.712 08:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:55.712 08:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:55.712 08:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2366222 00:05:55.712 08:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:55.712 08:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:55.712 08:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2366222' 00:05:55.712 killing process with pid 2366222 00:05:55.712 08:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 2366222 00:05:55.712 08:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 2366222 00:05:55.973 00:05:55.973 real 0m2.143s 00:05:55.973 user 0m2.383s 00:05:55.973 sys 0m0.556s 00:05:55.973 08:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:55.973 08:44:18 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.973 ************************************ 00:05:55.973 END TEST locking_app_on_locked_coremask 00:05:55.973 ************************************ 00:05:55.973 08:44:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:55.973 08:44:18 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:55.973 08:44:18 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:55.973 08:44:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.973 ************************************ 00:05:55.973 START TEST locking_overlapped_coremask 00:05:55.973 ************************************ 00:05:55.973 08:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:05:55.974 08:44:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2366873 00:05:55.974 08:44:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2366873 /var/tmp/spdk.sock 00:05:55.974 08:44:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:55.974 08:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 2366873 ']' 00:05:55.974 08:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.974 08:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:55.974 08:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:55.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.974 08:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:55.974 08:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.235 [2024-06-09 08:44:18.553139] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:56.235 [2024-06-09 08:44:18.553189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366873 ] 00:05:56.235 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.235 [2024-06-09 08:44:18.613318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.235 [2024-06-09 08:44:18.682872] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.235 [2024-06-09 08:44:18.682987] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.235 [2024-06-09 08:44:18.682989] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.807 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:56.807 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:56.807 08:44:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2366912 00:05:56.807 08:44:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2366912 /var/tmp/spdk2.sock 00:05:56.807 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:56.807 08:44:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock 00:05:56.807 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2366912 /var/tmp/spdk2.sock 00:05:56.807 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:56.807 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:56.807 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:56.807 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:56.807 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2366912 /var/tmp/spdk2.sock 00:05:56.807 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 2366912 ']' 00:05:56.807 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.807 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:56.807 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.807 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:56.807 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.068 [2024-06-09 08:44:19.381277] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:05:57.068 [2024-06-09 08:44:19.381331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366912 ] 00:05:57.068 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.068 [2024-06-09 08:44:19.450966] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2366873 has claimed it. 00:05:57.068 [2024-06-09 08:44:19.450995] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:57.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (2366912) - No such process 00:05:57.639 ERROR: process (pid: 2366912) is no longer running 00:05:57.639 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:57.639 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:05:57.639 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:57.639 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:57.639 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:57.639 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:57.639 08:44:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:57.639 08:44:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:57.639 08:44:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:57.639 08:44:19 event.cpu_locks.locking_overlapped_coremask -- 
event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:57.639 08:44:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2366873 00:05:57.639 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 2366873 ']' 00:05:57.639 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 2366873 00:05:57.639 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:05:57.639 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:57.639 08:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2366873 00:05:57.639 08:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:57.639 08:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:57.640 08:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2366873' 00:05:57.640 killing process with pid 2366873 00:05:57.640 08:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 2366873 00:05:57.640 08:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 2366873 00:05:57.900 00:05:57.900 real 0m1.752s 00:05:57.900 user 0m4.975s 00:05:57.900 sys 0m0.351s 00:05:57.900 08:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:57.900 08:44:20 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:57.900 ************************************ 00:05:57.900 END TEST locking_overlapped_coremask 00:05:57.900 ************************************ 00:05:57.900 08:44:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:57.900 08:44:20 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:57.900 08:44:20 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:57.900 08:44:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.900 ************************************ 00:05:57.900 START TEST locking_overlapped_coremask_via_rpc 00:05:57.900 ************************************ 00:05:57.900 08:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:05:57.900 08:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2367269 00:05:57.900 08:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2367269 /var/tmp/spdk.sock 00:05:57.901 08:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:57.901 08:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2367269 ']' 00:05:57.901 08:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.901 08:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:57.901 08:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:57.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.901 08:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:57.901 08:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.901 [2024-06-09 08:44:20.380586] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:57.901 [2024-06-09 08:44:20.380638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367269 ] 00:05:57.901 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.901 [2024-06-09 08:44:20.440551] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:57.901 [2024-06-09 08:44:20.440580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.161 [2024-06-09 08:44:20.513451] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.161 [2024-06-09 08:44:20.513525] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.161 [2024-06-09 08:44:20.513527] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.733 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:58.734 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:58.734 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2367288 00:05:58.734 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2367288 /var/tmp/spdk2.sock 00:05:58.734 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 
2367288 ']' 00:05:58.734 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:58.734 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.734 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:58.734 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.734 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:58.734 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.734 [2024-06-09 08:44:21.207316] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:58.734 [2024-06-09 08:44:21.207367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367288 ] 00:05:58.734 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.734 [2024-06-09 08:44:21.279289] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:58.734 [2024-06-09 08:44:21.279308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.995 [2024-06-09 08:44:21.389244] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.995 [2024-06-09 08:44:21.389397] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.995 [2024-06-09 08:44:21.389399] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:59.569 08:44:21 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.569 [2024-06-09 08:44:21.984459] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2367269 has claimed it. 00:05:59.569 request: 00:05:59.569 { 00:05:59.569 "method": "framework_enable_cpumask_locks", 00:05:59.569 "req_id": 1 00:05:59.569 } 00:05:59.569 Got JSON-RPC error response 00:05:59.569 response: 00:05:59.569 { 00:05:59.569 "code": -32603, 00:05:59.569 "message": "Failed to claim CPU core: 2" 00:05:59.569 } 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2367269 /var/tmp/spdk.sock 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 
-- # '[' -z 2367269 ']' 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:59.569 08:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.831 08:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:59.831 08:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:59.831 08:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2367288 /var/tmp/spdk2.sock 00:05:59.831 08:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2367288 ']' 00:05:59.831 08:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.831 08:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:59.831 08:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:59.831 08:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:59.831 08:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.831 08:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:59.831 08:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:59.831 08:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:59.831 08:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:59.831 08:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:59.831 08:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:59.831 00:05:59.831 real 0m1.999s 00:05:59.831 user 0m0.787s 00:05:59.831 sys 0m0.134s 00:05:59.831 08:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:59.831 08:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.831 ************************************ 00:05:59.831 END TEST locking_overlapped_coremask_via_rpc 00:05:59.831 ************************************ 00:05:59.831 08:44:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:59.831 08:44:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2367269 ]] 00:05:59.831 08:44:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2367269 00:05:59.831 08:44:22 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2367269 ']' 00:05:59.831 08:44:22 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2367269 00:05:59.831 08:44:22 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:05:59.831 08:44:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:59.831 08:44:22 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2367269 00:06:00.092 08:44:22 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:00.092 08:44:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:00.092 08:44:22 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2367269' 00:06:00.092 killing process with pid 2367269 00:06:00.092 08:44:22 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 2367269 00:06:00.092 08:44:22 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 2367269 00:06:00.092 08:44:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2367288 ]] 00:06:00.092 08:44:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2367288 00:06:00.092 08:44:22 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2367288 ']' 00:06:00.092 08:44:22 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2367288 00:06:00.092 08:44:22 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:00.092 08:44:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:00.092 08:44:22 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2367288 00:06:00.354 08:44:22 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:06:00.354 08:44:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:06:00.354 08:44:22 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 
2367288' 00:06:00.354 killing process with pid 2367288 00:06:00.354 08:44:22 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 2367288 00:06:00.354 08:44:22 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 2367288 00:06:00.354 08:44:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:00.354 08:44:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:00.354 08:44:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2367269 ]] 00:06:00.354 08:44:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2367269 00:06:00.354 08:44:22 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2367269 ']' 00:06:00.354 08:44:22 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2367269 00:06:00.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2367269) - No such process 00:06:00.354 08:44:22 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 2367269 is not found' 00:06:00.354 Process with pid 2367269 is not found 00:06:00.354 08:44:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2367288 ]] 00:06:00.354 08:44:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2367288 00:06:00.354 08:44:22 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2367288 ']' 00:06:00.354 08:44:22 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2367288 00:06:00.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2367288) - No such process 00:06:00.354 08:44:22 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 2367288 is not found' 00:06:00.354 Process with pid 2367288 is not found 00:06:00.354 08:44:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:00.354 00:06:00.354 real 0m15.737s 00:06:00.354 user 0m27.073s 00:06:00.354 sys 0m4.562s 00:06:00.354 08:44:22 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:00.354 
08:44:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.354 ************************************ 00:06:00.354 END TEST cpu_locks 00:06:00.354 ************************************ 00:06:00.616 00:06:00.617 real 0m41.279s 00:06:00.617 user 1m20.582s 00:06:00.617 sys 0m7.675s 00:06:00.617 08:44:22 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:00.617 08:44:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.617 ************************************ 00:06:00.617 END TEST event 00:06:00.617 ************************************ 00:06:00.617 08:44:22 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:00.617 08:44:22 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:00.617 08:44:22 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:00.617 08:44:22 -- common/autotest_common.sh@10 -- # set +x 00:06:00.617 ************************************ 00:06:00.617 START TEST thread 00:06:00.617 ************************************ 00:06:00.617 08:44:22 thread -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:00.617 * Looking for test storage... 
00:06:00.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:00.617 08:44:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:00.617 08:44:23 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:00.617 08:44:23 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:00.617 08:44:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.617 ************************************ 00:06:00.617 START TEST thread_poller_perf 00:06:00.617 ************************************ 00:06:00.617 08:44:23 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:00.617 [2024-06-09 08:44:23.161831] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:00.617 [2024-06-09 08:44:23.161924] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367809 ] 00:06:00.922 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.922 [2024-06-09 08:44:23.228689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.922 [2024-06-09 08:44:23.302931] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.922 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:01.865 ====================================== 00:06:01.865 busy:2410590562 (cyc) 00:06:01.865 total_run_count: 287000 00:06:01.865 tsc_hz: 2400000000 (cyc) 00:06:01.865 ====================================== 00:06:01.865 poller_cost: 8399 (cyc), 3499 (nsec) 00:06:01.865 00:06:01.865 real 0m1.224s 00:06:01.865 user 0m1.137s 00:06:01.865 sys 0m0.082s 00:06:01.865 08:44:24 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:01.865 08:44:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.865 ************************************ 00:06:01.865 END TEST thread_poller_perf 00:06:01.865 ************************************ 00:06:01.865 08:44:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:01.865 08:44:24 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:01.865 08:44:24 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:01.865 08:44:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.127 ************************************ 00:06:02.127 START TEST thread_poller_perf 00:06:02.127 ************************************ 00:06:02.127 08:44:24 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:02.127 [2024-06-09 08:44:24.465363] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:02.127 [2024-06-09 08:44:24.465498] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2368077 ] 00:06:02.127 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.127 [2024-06-09 08:44:24.535931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.127 [2024-06-09 08:44:24.605492] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.127 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:03.510 ====================================== 00:06:03.510 busy:2401746240 (cyc) 00:06:03.510 total_run_count: 3810000 00:06:03.510 tsc_hz: 2400000000 (cyc) 00:06:03.510 ====================================== 00:06:03.510 poller_cost: 630 (cyc), 262 (nsec) 00:06:03.510 00:06:03.510 real 0m1.217s 00:06:03.510 user 0m1.134s 00:06:03.510 sys 0m0.078s 00:06:03.510 08:44:25 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:03.510 08:44:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.510 ************************************ 00:06:03.510 END TEST thread_poller_perf 00:06:03.510 ************************************ 00:06:03.510 08:44:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:03.510 00:06:03.510 real 0m2.699s 00:06:03.510 user 0m2.366s 00:06:03.510 sys 0m0.339s 00:06:03.510 08:44:25 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:03.510 08:44:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.510 ************************************ 00:06:03.510 END TEST thread 00:06:03.510 ************************************ 00:06:03.510 08:44:25 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:03.510 08:44:25 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:03.510 
08:44:25 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:03.510 08:44:25 -- common/autotest_common.sh@10 -- # set +x 00:06:03.510 ************************************ 00:06:03.510 START TEST accel 00:06:03.510 ************************************ 00:06:03.510 08:44:25 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:03.510 * Looking for test storage... 00:06:03.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:03.510 08:44:25 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:03.510 08:44:25 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:03.510 08:44:25 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:03.510 08:44:25 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2368464 00:06:03.510 08:44:25 accel -- accel/accel.sh@63 -- # waitforlisten 2368464 00:06:03.510 08:44:25 accel -- common/autotest_common.sh@830 -- # '[' -z 2368464 ']' 00:06:03.510 08:44:25 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.510 08:44:25 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:03.511 08:44:25 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:03.511 08:44:25 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:03.511 08:44:25 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:03.511 08:44:25 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:03.511 08:44:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.511 08:44:25 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.511 08:44:25 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.511 08:44:25 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.511 08:44:25 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.511 08:44:25 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.511 08:44:25 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:03.511 08:44:25 accel -- accel/accel.sh@41 -- # jq -r . 00:06:03.511 [2024-06-09 08:44:25.933879] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:03.511 [2024-06-09 08:44:25.933950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2368464 ] 00:06:03.511 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.511 [2024-06-09 08:44:25.999785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.771 [2024-06-09 08:44:26.074277] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.343 08:44:26 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:04.343 08:44:26 accel -- common/autotest_common.sh@863 -- # return 0 00:06:04.343 08:44:26 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:04.343 08:44:26 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:04.343 08:44:26 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:04.343 08:44:26 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:04.343 08:44:26 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py 
accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:04.343 08:44:26 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:04.343 08:44:26 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:04.343 08:44:26 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:04.343 08:44:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.343 08:44:26 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:04.343 08:44:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.343 08:44:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.343 08:44:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.343 08:44:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.343 08:44:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.343 08:44:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.343 08:44:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.343 08:44:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.343 08:44:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.343 08:44:26 accel -- 
accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.343 08:44:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.343 08:44:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.343 08:44:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.343 08:44:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.343 08:44:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.343 08:44:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.343 08:44:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.343 08:44:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.343 08:44:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.343 08:44:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.343 08:44:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.343 08:44:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.343 08:44:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.343 08:44:26 accel -- 
accel/accel.sh@72 -- # IFS== 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.343 08:44:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.343 08:44:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.343 08:44:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.343 08:44:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.343 08:44:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.343 08:44:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.343 08:44:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.343 08:44:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.343 08:44:26 accel -- accel/accel.sh@75 -- # killprocess 2368464 00:06:04.343 08:44:26 accel -- common/autotest_common.sh@949 -- # '[' -z 2368464 ']' 00:06:04.343 08:44:26 accel -- common/autotest_common.sh@953 -- # kill -0 2368464 00:06:04.343 08:44:26 accel -- common/autotest_common.sh@954 -- # uname 00:06:04.343 08:44:26 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:04.343 08:44:26 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2368464 00:06:04.343 08:44:26 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:04.343 08:44:26 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:04.343 08:44:26 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2368464' 00:06:04.343 killing process with pid 2368464 00:06:04.343 08:44:26 accel -- common/autotest_common.sh@968 -- # 
kill 2368464 00:06:04.343 08:44:26 accel -- common/autotest_common.sh@973 -- # wait 2368464 00:06:04.605 08:44:27 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:04.605 08:44:27 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:04.605 08:44:27 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:04.605 08:44:27 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:04.605 08:44:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.605 08:44:27 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:06:04.605 08:44:27 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:04.605 08:44:27 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:04.605 08:44:27 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.605 08:44:27 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.605 08:44:27 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.605 08:44:27 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.605 08:44:27 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.605 08:44:27 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:04.605 08:44:27 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
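The `exp_opcs` trace above (accel.sh@70-73) shows the opcode assignments coming back from `accel_get_opc_assignments` as jq-rendered `key=value` lines, which the script splits with `IFS==` into an associative array. A minimal standalone sketch of that parsing pattern, with the RPC output simulated by `printf` and illustrative opcode names (the real names come from the RPC, not from this sketch):

```shell
#!/usr/bin/env bash
# Sketch of the opcode-assignment parsing seen at accel.sh@70-73 above.
# The RPC's jq output (". | to_entries | map(\"\(.key)=\(.value)\") | .[]")
# yields one "opc=module" line per opcode; here it is simulated with printf.
declare -A expected_opcs
while IFS== read -r opc module; do
    # accel.sh@73 records "software" as the expected module for every opcode
    expected_opcs["$opc"]=software
done < <(printf '%s\n' "copy=software" "crc32c=software" "compress=software")
echo "${expected_opcs[crc32c]}"   # prints: software
```

Splitting on `IFS==` is why the loop body never needs to strip the `=` itself: `read` consumes it as the field separator.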
00:06:04.605 08:44:27 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:04.605 08:44:27 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:04.605 08:44:27 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:04.605 08:44:27 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:04.605 08:44:27 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:04.605 08:44:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.605 ************************************ 00:06:04.605 START TEST accel_missing_filename 00:06:04.605 ************************************ 00:06:04.605 08:44:27 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:06:04.605 08:44:27 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:06:04.605 08:44:27 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:04.605 08:44:27 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:04.605 08:44:27 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:04.605 08:44:27 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:04.605 08:44:27 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:04.605 08:44:27 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:06:04.605 08:44:27 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:04.605 08:44:27 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:04.605 08:44:27 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.605 08:44:27 
accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.605 08:44:27 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.605 08:44:27 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.605 08:44:27 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.605 08:44:27 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:04.605 08:44:27 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:04.866 [2024-06-09 08:44:27.178432] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:04.866 [2024-06-09 08:44:27.178511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2368838 ] 00:06:04.866 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.866 [2024-06-09 08:44:27.242771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.866 [2024-06-09 08:44:27.315956] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.866 [2024-06-09 08:44:27.348218] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.866 [2024-06-09 08:44:27.385070] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:05.126 A filename is required. 
00:06:05.126 08:44:27 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:06:05.126 08:44:27 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:05.126 08:44:27 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:06:05.126 08:44:27 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:06:05.126 08:44:27 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:06:05.126 08:44:27 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:05.126 00:06:05.126 real 0m0.290s 00:06:05.126 user 0m0.231s 00:06:05.126 sys 0m0.098s 00:06:05.126 08:44:27 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:05.126 08:44:27 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:05.126 ************************************ 00:06:05.126 END TEST accel_missing_filename 00:06:05.126 ************************************ 00:06:05.126 08:44:27 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:05.126 08:44:27 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:05.126 08:44:27 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:05.126 08:44:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.126 ************************************ 00:06:05.126 START TEST accel_compress_verify 00:06:05.126 ************************************ 00:06:05.126 08:44:27 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:05.126 08:44:27 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:06:05.126 08:44:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # 
valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:05.126 08:44:27 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:05.126 08:44:27 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:05.126 08:44:27 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:05.126 08:44:27 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:05.126 08:44:27 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:05.126 08:44:27 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:05.126 08:44:27 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:05.126 08:44:27 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.126 08:44:27 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.126 08:44:27 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.127 08:44:27 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.127 08:44:27 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.127 08:44:27 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:05.127 08:44:27 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:05.127 [2024-06-09 08:44:27.544754] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:05.127 [2024-06-09 08:44:27.544817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2368863 ] 00:06:05.127 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.127 [2024-06-09 08:44:27.606792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.127 [2024-06-09 08:44:27.674941] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.387 [2024-06-09 08:44:27.706798] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.387 [2024-06-09 08:44:27.743558] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:05.387 00:06:05.387 Compression does not support the verify option, aborting. 00:06:05.387 08:44:27 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:06:05.387 08:44:27 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:05.387 08:44:27 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:06:05.387 08:44:27 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:06:05.387 08:44:27 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:06:05.387 08:44:27 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:05.387 00:06:05.387 real 0m0.283s 00:06:05.387 user 0m0.219s 00:06:05.387 sys 0m0.105s 00:06:05.387 08:44:27 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:05.387 08:44:27 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:05.387 ************************************ 00:06:05.387 END TEST accel_compress_verify 00:06:05.387 ************************************ 00:06:05.387 08:44:27 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:05.387 
08:44:27 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:05.387 08:44:27 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:05.387 08:44:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.387 ************************************ 00:06:05.387 START TEST accel_wrong_workload 00:06:05.387 ************************************ 00:06:05.387 08:44:27 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:06:05.387 08:44:27 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:06:05.387 08:44:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:05.387 08:44:27 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:05.387 08:44:27 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:05.387 08:44:27 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:05.387 08:44:27 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:05.387 08:44:27 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:06:05.387 08:44:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:05.387 08:44:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:05.387 08:44:27 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.387 08:44:27 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.387 08:44:27 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.387 08:44:27 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.387 08:44:27 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:06:05.387 08:44:27 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:05.387 08:44:27 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:05.387 Unsupported workload type: foobar 00:06:05.387 [2024-06-09 08:44:27.900392] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:05.387 accel_perf options: 00:06:05.387 [-h help message] 00:06:05.387 [-q queue depth per core] 00:06:05.387 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:05.387 [-T number of threads per core 00:06:05.387 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:05.387 [-t time in seconds] 00:06:05.387 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:05.387 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:05.387 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:05.387 [-l for compress/decompress workloads, name of uncompressed input file 00:06:05.387 [-S for crc32c workload, use this seed value (default 0) 00:06:05.387 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:05.387 [-f for fill workload, use this BYTE value (default 255) 00:06:05.387 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:05.387 [-y verify result if this switch is on] 00:06:05.387 [-a tasks to allocate per core (default: same value as -q)] 00:06:05.387 Can be used to spread operations across a wider range of memory. 
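The negative tests above run `accel_perf` through the `NOT` helper, and the trace (autotest_common.sh@649-676) shows its exit-status normalization: a signal-style status above 128 is reduced by 128 (234 becomes 106, 161 becomes 33), any remaining nonzero status collapses to 1, and the test passes only if the wrapped command failed. A hedged, self-contained sketch of that logic (the function name `not_ok` is invented here; the real helper is `NOT` and its `case` table maps more codes than this):

```shell
#!/usr/bin/env bash
# Sketch of the NOT helper's exit-status normalization, as reconstructed
# from the es=234 -> es=106 -> es=1 sequence in the log above.
not_ok() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es - 128 ))   # signal exit: 234 -> 106, 161 -> 33
    case "$es" in
        0) ;;                              # wrapped command succeeded
        *) es=1 ;;                         # any failure normalizes to 1
    esac
    (( !es == 0 ))                         # succeed only if the command failed
}
not_ok false && echo "failure detected"    # prints: failure detected
```

The final arithmetic test `(( !es == 0 ))` inverts the status: it returns success exactly when `es` is nonzero, which is what a negative test wants.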
00:06:05.387 08:44:27 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:06:05.387 08:44:27 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:05.387 08:44:27 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:05.387 08:44:27 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:05.387 00:06:05.387 real 0m0.038s 00:06:05.387 user 0m0.028s 00:06:05.387 sys 0m0.010s 00:06:05.387 08:44:27 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:05.387 08:44:27 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:05.387 ************************************ 00:06:05.387 END TEST accel_wrong_workload 00:06:05.387 ************************************ 00:06:05.387 Error: writing output failed: Broken pipe 00:06:05.387 08:44:27 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:05.388 08:44:27 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:05.388 08:44:27 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:05.388 08:44:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.648 ************************************ 00:06:05.648 START TEST accel_negative_buffers 00:06:05.648 ************************************ 00:06:05.648 08:44:27 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:05.648 08:44:27 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:06:05.648 08:44:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:05.648 08:44:27 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:05.648 08:44:27 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:05.648 08:44:27 
accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:05.648 08:44:27 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:05.648 08:44:27 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:06:05.648 08:44:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:05.648 08:44:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:05.648 08:44:27 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.648 08:44:27 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.648 08:44:27 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.648 08:44:27 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.648 08:44:27 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.648 08:44:27 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:05.648 08:44:27 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:05.648 -x option must be non-negative. 00:06:05.648 [2024-06-09 08:44:28.012096] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:05.648 accel_perf options: 00:06:05.648 [-h help message] 00:06:05.648 [-q queue depth per core] 00:06:05.648 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:05.648 [-T number of threads per core 00:06:05.648 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:05.648 [-t time in seconds] 00:06:05.648 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:05.648 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:05.648 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:05.648 [-l for compress/decompress workloads, name of uncompressed input file 00:06:05.648 [-S for crc32c workload, use this seed value (default 0) 00:06:05.648 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:05.648 [-f for fill workload, use this BYTE value (default 255) 00:06:05.648 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:05.648 [-y verify result if this switch is on] 00:06:05.648 [-a tasks to allocate per core (default: same value as -q)] 00:06:05.648 Can be used to spread operations across a wider range of memory. 
00:06:05.648 08:44:28 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:06:05.648 08:44:28 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:05.648 08:44:28 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:05.648 08:44:28 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:05.648 00:06:05.648 real 0m0.037s 00:06:05.648 user 0m0.019s 00:06:05.648 sys 0m0.017s 00:06:05.648 08:44:28 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:05.648 08:44:28 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:05.648 ************************************ 00:06:05.648 END TEST accel_negative_buffers 00:06:05.648 ************************************ 00:06:05.648 Error: writing output failed: Broken pipe 00:06:05.648 08:44:28 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:05.648 08:44:28 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:05.648 08:44:28 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:05.648 08:44:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.648 ************************************ 00:06:05.648 START TEST accel_crc32c 00:06:05.648 ************************************ 00:06:05.648 08:44:28 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:05.648 08:44:28 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:05.648 08:44:28 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:05.648 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.648 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.648 08:44:28 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:05.648 08:44:28 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:05.648 08:44:28 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:05.648 08:44:28 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.648 08:44:28 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.648 08:44:28 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.648 08:44:28 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.648 08:44:28 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.648 08:44:28 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:05.648 08:44:28 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:05.648 [2024-06-09 08:44:28.122311] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:05.648 [2024-06-09 08:44:28.122369] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2368947 ] 00:06:05.648 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.648 [2024-06-09 08:44:28.182970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.909 [2024-06-09 08:44:28.248917] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.909 08:44:28 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.909 08:44:28 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.909 08:44:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.851 08:44:29 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:06.851 08:44:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.851 00:06:06.851 real 0m1.284s 00:06:06.851 user 0m1.197s 00:06:06.851 sys 0m0.098s 00:06:06.851 08:44:29 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:06.851 08:44:29 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:06.851 ************************************ 00:06:06.851 END TEST accel_crc32c 00:06:06.851 ************************************ 00:06:07.112 08:44:29 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:07.112 08:44:29 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:07.112 08:44:29 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:07.112 08:44:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.112 ************************************ 00:06:07.112 START TEST accel_crc32c_C2 00:06:07.112 ************************************ 00:06:07.112 
08:44:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:07.112 [2024-06-09 08:44:29.482626] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:07.112 [2024-06-09 08:44:29.482718] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2369277 ] 00:06:07.112 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.112 [2024-06-09 08:44:29.544624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.112 [2024-06-09 08:44:29.611770] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" 
in 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.112 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.113 
08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.113 08:44:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.499 08:44:30 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.499 00:06:08.499 real 0m1.287s 00:06:08.499 user 0m1.198s 00:06:08.499 sys 0m0.100s 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:08.499 08:44:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:08.499 ************************************ 00:06:08.499 END TEST accel_crc32c_C2 00:06:08.499 ************************************ 00:06:08.499 08:44:30 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:08.499 08:44:30 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:08.499 08:44:30 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:08.499 08:44:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.499 ************************************ 00:06:08.499 START TEST accel_copy 00:06:08.499 ************************************ 00:06:08.499 08:44:30 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:06:08.499 08:44:30 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:08.499 08:44:30 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:08.499 08:44:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:30 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:08.499 08:44:30 accel.accel_copy -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:08.499 08:44:30 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:08.499 08:44:30 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.499 08:44:30 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.499 08:44:30 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.499 08:44:30 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.499 08:44:30 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.499 08:44:30 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:08.499 08:44:30 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:08.499 [2024-06-09 08:44:30.846470] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:08.499 [2024-06-09 08:44:30.846563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2369629 ] 00:06:08.499 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.499 [2024-06-09 08:44:30.908565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.499 [2024-06-09 08:44:30.978108] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@20 -- # val=software 
00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.499 08:44:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 
accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:09.884 08:44:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.884 00:06:09.884 real 0m1.290s 00:06:09.884 user 0m1.196s 00:06:09.884 sys 0m0.103s 00:06:09.884 08:44:32 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:09.884 08:44:32 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:09.884 ************************************ 00:06:09.884 END TEST accel_copy 00:06:09.884 ************************************ 00:06:09.884 08:44:32 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.884 08:44:32 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:09.884 08:44:32 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:09.884 08:44:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.884 ************************************ 00:06:09.884 START TEST accel_fill 00:06:09.884 ************************************ 00:06:09.884 08:44:32 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 
00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:09.884 [2024-06-09 08:44:32.211245] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:09.884 [2024-06-09 08:44:32.211305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2369982 ] 00:06:09.884 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.884 [2024-06-09 08:44:32.272392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.884 [2024-06-09 08:44:32.339313] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 
accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.884 08:44:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.306 08:44:33 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:11.306 08:44:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.306 00:06:11.306 real 0m1.287s 00:06:11.306 user 0m1.201s 00:06:11.306 sys 0m0.098s 00:06:11.306 08:44:33 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:11.306 08:44:33 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:11.306 ************************************ 00:06:11.306 END TEST accel_fill 00:06:11.306 ************************************ 00:06:11.306 08:44:33 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:11.306 08:44:33 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:11.306 08:44:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:11.306 08:44:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.306 ************************************ 00:06:11.306 START TEST accel_copy_crc32c 00:06:11.306 ************************************ 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 
00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:11.306 [2024-06-09 08:44:33.573304] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:11.306 [2024-06-09 08:44:33.573370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2370200 ] 00:06:11.306 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.306 [2024-06-09 08:44:33.635655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.306 [2024-06-09 08:44:33.705773] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:11.306 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.307 
08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.307 08:44:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 
08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.693 00:06:12.693 real 0m1.289s 00:06:12.693 user 0m1.201s 00:06:12.693 sys 0m0.101s 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:12.693 08:44:34 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:12.693 ************************************ 00:06:12.693 END TEST accel_copy_crc32c 00:06:12.693 ************************************ 00:06:12.693 08:44:34 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:12.693 08:44:34 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:12.693 08:44:34 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:12.693 08:44:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.693 ************************************ 00:06:12.693 START TEST accel_copy_crc32c_C2 00:06:12.693 ************************************ 00:06:12.693 08:44:34 
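Each test above is launched through `run_test`, which prints the `START TEST` / `END TEST` banners and the `real`/`user`/`sys` timing lines visible in the log. The real helper lives in SPDK's common/autotest_common.sh; a simplified stand-in (an assumption, not the actual implementation) could be:

```shell
#!/usr/bin/env bash
# Minimal sketch of a run_test-style wrapper: banner, timed command,
# banner. The real SPDK helper also manages xtrace state.
run_test() {
  local name=$1; shift
  echo "************ START TEST $name ************"
  time "$@"                 # emits the real/user/sys lines seen above
  local rc=$?
  echo "************ END TEST $name ************"
  return $rc
}
run_test demo_true true
```

Note that `time` here is the bash keyword, so its timing summary goes to stderr, matching how the CI log interleaves the timing lines with the banners.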
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:12.693 08:44:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.693 08:44:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:12.693 08:44:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:12.693 08:44:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:12.693 08:44:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.693 08:44:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.693 08:44:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.693 08:44:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.693 08:44:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.693 08:44:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.693 08:44:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:12.693 08:44:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:12.693 [2024-06-09 08:44:34.940017] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:12.693 [2024-06-09 08:44:34.940082] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2370389 ] 00:06:12.693 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.693 [2024-06-09 08:44:35.002247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.693 [2024-06-09 08:44:35.072520] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val= 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@20 -- # val=Yes 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.693 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.694 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.694 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.694 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.694 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.694 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.694 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.694 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.694 08:44:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r 
var val 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.080 00:06:14.080 real 0m1.289s 00:06:14.080 user 0m1.195s 00:06:14.080 sys 0m0.105s 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:14.080 08:44:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:14.080 ************************************ 00:06:14.080 END TEST accel_copy_crc32c_C2 00:06:14.080 ************************************ 00:06:14.080 08:44:36 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:14.080 08:44:36 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:14.080 08:44:36 accel -- 
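The workloads themselves are driven by the `accel_perf` example binary, whose path and flags appear verbatim in the trace (`-t` run seconds, `-w` workload, `-y` verify results, `-C` chained-crc count for the C2 variant). A guarded invocation sketch, with the `SPDK_DIR` default taken from the log but adjustable for other checkouts, might be:

```shell
#!/usr/bin/env bash
# Sketch of invoking the accel_perf example directly with the flags
# seen in the trace; skips gracefully if the binary is not built.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
perf=$SPDK_DIR/build/examples/accel_perf
cmd=("$perf" -t 1 -w dualcast -y)
if [ -x "$perf" ]; then
  "${cmd[@]}"
else
  echo "accel_perf not built; would run: ${cmd[*]}"
fi
```

In the CI runs above the harness additionally passes `-c /dev/fd/62` to feed the JSON accel configuration over a file descriptor; that part is omitted here since it requires the surrounding accel.sh plumbing.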
common/autotest_common.sh@1106 -- # xtrace_disable 00:06:14.080 08:44:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.080 ************************************ 00:06:14.080 START TEST accel_dualcast 00:06:14.080 ************************************ 00:06:14.080 08:44:36 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:14.080 [2024-06-09 08:44:36.306313] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:14.080 [2024-06-09 08:44:36.306397] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2370725 ] 00:06:14.080 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.080 [2024-06-09 08:44:36.367488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.080 [2024-06-09 08:44:36.433511] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.080 
08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.080 08:44:36 accel.accel_dualcast -- 
accel/accel.sh@20 -- # val=32 00:06:14.080 08:44:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
[00:06:14.080-00:06:15.024 xtrace elided: the same IFS=: / read -r var val / case "$var" loop repeated for val=1, val='1 seconds', val=Yes, and the trailing empty val= records]
00:06:15.024 08:44:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:15.024 08:44:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:06:15.024 08:44:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:15.024 real 0m1.285s
00:06:15.024 user 0m1.200s
00:06:15.024 sys 0m0.095s
00:06:15.024 08:44:37 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:15.024 08:44:37 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:06:15.024 ************************************
00:06:15.024 END TEST accel_dualcast
00:06:15.024 ************************************
00:06:15.285 08:44:37 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:06:15.285 08:44:37 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']'
00:06:15.285 08:44:37 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:15.285 08:44:37 accel -- common/autotest_common.sh@10 -- # set +x
00:06:15.285 ************************************
00:06:15.285 START TEST accel_compare
00:06:15.285 ************************************
00:06:15.285 08:44:37 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:06:15.285 08:44:37 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:06:15.285 08:44:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:15.285 08:44:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:15.285 08:44:37 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:06:15.285 08:44:37 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:06:15.285 08:44:37 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:06:15.285 08:44:37 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:15.285 08:44:37 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:15.285 08:44:37 accel.accel_compare --
accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:15.285 08:44:37 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:15.285 08:44:37 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:15.285 08:44:37 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=,
00:06:15.285 08:44:37 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
00:06:15.285 [2024-06-09 08:44:37.671710] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:06:15.285 [2024-06-09 08:44:37.671814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2371072 ]
00:06:15.285 EAL: No free 2048 kB hugepages reported on node 1
00:06:15.285 [2024-06-09 08:44:37.736033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:15.285 [2024-06-09 08:44:37.800252] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
[00:06:15.285-00:06:15.546 xtrace elided: IFS=: / read -r var val / case "$var" loop over val= (empty), val=0x1, val=compare (accel_opc=compare), val='4096 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes]
[00:06:16.490 08:44:38 xtrace elided: trailing empty val= records read back after the run]
00:06:16.490 08:44:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:16.490 08:44:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:06:16.490 08:44:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:16.490 real 0m1.288s
00:06:16.490 user 0m1.200s
00:06:16.490 sys 0m0.099s
00:06:16.490 08:44:38 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:16.490 08:44:38 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:06:16.490 ************************************
00:06:16.490 END TEST accel_compare
00:06:16.490 ************************************
00:06:16.490 08:44:38 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:06:16.490 08:44:38 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']'
00:06:16.490 08:44:38 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:16.490 08:44:38 accel -- common/autotest_common.sh@10 -- # set +x
00:06:16.490 ************************************
00:06:16.490 START TEST accel_xor
00:06:16.490 ************************************
00:06:16.490 08:44:39 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:06:16.490 08:44:39 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:06:16.490 08:44:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:16.490 08:44:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:16.490 08:44:39 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:06:16.490 08:44:39 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1
-w xor -y
00:06:16.490 08:44:39 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:06:16.490 08:44:39 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:16.490 08:44:39 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:16.490 08:44:39 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:16.490 08:44:39 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:16.490 08:44:39 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:16.490 08:44:39 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:06:16.490 08:44:39 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:06:16.490 [2024-06-09 08:44:39.030761] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:06:16.490 [2024-06-09 08:44:39.030823] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2371421 ]
00:06:16.752 EAL: No free 2048 kB hugepages reported on node 1
00:06:16.752 [2024-06-09 08:44:39.091357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:16.752 [2024-06-09 08:44:39.154516] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
[00:06:16.752 08:44:39 xtrace elided: IFS=: / read -r var val / case "$var" loop over val= (empty), val=0x1, val=xor (accel_opc=xor), val=2, val='4096 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes]
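The `case "$var"` / `IFS=:` / `read -r var val` triplets that dominate this trace come from a config-echo loop in accel.sh: accel_perf reports its settings as colon-separated `var:val` records, and the loop captures the fields that the final `[[ -n software ]]` / `[[ -n xor ]]` checks assert on. A minimal stand-alone sketch of such a loop (the field names and the input records below are hypothetical stand-ins, not SPDK's actual stream):

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the read loop seen in the xtrace
# (accel.sh@19-23). Each record is "var:val"; IFS=: splits on the first
# colon, and `read -r val` keeps everything after it (so "run_time:1 seconds"
# yields val='1 seconds', as in the log).
accel_module=""
accel_opc=""
while IFS=: read -r var val; do
  case "$var" in
    module) accel_module=$val ;;    # analogous to accel_module=software in the log
    operation) accel_opc=$val ;;    # analogous to accel_opc=xor in the log
    *) : ;;                         # queue depth, run time, etc. are read but unused here
  esac
done <<'EOF'
module:software
operation:xor
queue_depth:32
run_time:1 seconds
verify:Yes
EOF
echo "$accel_module $accel_opc"
```

Empty `val=` records, like the trailing ones in the trace, simply fall through the `*` arm, which is why the loop tolerates blank lines before and after the run.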
[00:06:16.752-00:06:18.139 xtrace elided: trailing empty val= records read back after the run]
00:06:18.139 08:44:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:18.139 08:44:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:18.139 08:44:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:18.139 real 0m1.280s
00:06:18.139 user 0m1.194s
00:06:18.139 sys 0m0.097s
00:06:18.139 08:44:40 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:18.139 08:44:40 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:06:18.139 ************************************
00:06:18.139 END TEST accel_xor
00:06:18.139 ************************************
00:06:18.139 08:44:40 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:06:18.139 08:44:40 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']'
00:06:18.139 08:44:40 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:18.139 08:44:40 accel -- common/autotest_common.sh@10 -- # set +x
00:06:18.139 ************************************
00:06:18.139 START TEST accel_xor
00:06:18.139 ************************************
00:06:18.139 08:44:40 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:06:18.139 08:44:40 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:06:18.139 08:44:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:18.139 08:44:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:18.139 08:44:40 accel.accel_xor -- accel/accel.sh@15 --
# accel_perf -t 1 -w xor -y -x 3
00:06:18.139 08:44:40 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:06:18.139 08:44:40 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:06:18.139 08:44:40 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:18.139 08:44:40 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:18.139 08:44:40 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:18.139 08:44:40 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:18.139 08:44:40 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:18.139 08:44:40 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:06:18.139 08:44:40 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:06:18.140 [2024-06-09 08:44:40.388941] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:06:18.140 [2024-06-09 08:44:40.389007] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2371656 ]
00:06:18.140 EAL: No free 2048 kB hugepages reported on node 1
00:06:18.140 [2024-06-09 08:44:40.451478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:18.140 [2024-06-09 08:44:40.522265] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
[00:06:18.140-00:06:19.526 xtrace elided: IFS=: / read -r var val / case "$var" loop over val= (empty), val=0x1, val=xor (accel_opc=xor), val=3, val='4096 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes]
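The `START TEST` / `END TEST` banners and the `real`/`user`/`sys` triplets between them are produced by a `run_test`-style wrapper around each test function. A minimal sketch of such a wrapper (hypothetical; the real helper lives in SPDK's common/autotest_common.sh and also manages xtrace state):

```shell
#!/usr/bin/env bash
# Hypothetical run_test-style wrapper: print banners around the command and
# let bash's `time` keyword report real/user/sys (on stderr), as in the log.
# Not SPDK's actual implementation.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test_sketch demo_sleep sleep 0.1
```

Note that `time` here is the bash keyword, not /usr/bin/time, so its output goes to the shell's stderr; that is why the timing triplets in the log interleave with the banner lines rather than following the command's own stdout.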
[00:06:19.526 08:44:41 xtrace elided: final empty val= records]
00:06:19.526 08:44:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:19.526 08:44:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:19.526 08:44:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:19.526 real 0m1.292s
00:06:19.526 user 0m1.197s
00:06:19.526 sys 0m0.105s
00:06:19.526 08:44:41 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:19.526 08:44:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:06:19.526 ************************************
00:06:19.526 END TEST accel_xor
00:06:19.526 ************************************
00:06:19.526 08:44:41 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:06:19.526 08:44:41 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']'
00:06:19.526 08:44:41 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:19.526 08:44:41 accel -- common/autotest_common.sh@10 -- # set +x
00:06:19.526 ************************************
00:06:19.526 START TEST accel_dif_verify
00:06:19.526 ************************************
00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc
00:06:19.526 08:44:41 accel.accel_dif_verify --
accel/accel.sh@17 -- # local accel_module
00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=,
00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
00:06:19.526 [2024-06-09 08:44:41.757688] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:06:19.526 [2024-06-09 08:44:41.757785] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2371846 ]
00:06:19.526 EAL: No free 2048 kB hugepages reported on node 1
00:06:19.526 [2024-06-09 08:44:41.819179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:19.526 [2024-06-09 08:44:41.885913] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
[00:06:19.526 08:44:41 xtrace elided: start of the IFS=: / read -r var val / case "$var" loop, val= (empty), val=0x1]
00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@21
-- # case "$var" in 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.526 08:44:41 
accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:19.526 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.527 08:44:41 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.527 08:44:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.470 08:44:43 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:20.470 08:44:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.470 00:06:20.470 real 0m1.287s 00:06:20.470 user 0m1.195s 00:06:20.470 sys 0m0.105s 00:06:20.470 08:44:43 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:20.470 08:44:43 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:20.470 ************************************ 00:06:20.470 END TEST accel_dif_verify 00:06:20.470 ************************************ 00:06:20.732 08:44:43 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:20.732 08:44:43 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:20.732 08:44:43 accel -- common/autotest_common.sh@1106 -- # 
xtrace_disable 00:06:20.732 08:44:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.732 ************************************ 00:06:20.732 START TEST accel_dif_generate 00:06:20.732 ************************************ 00:06:20.732 08:44:43 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:20.732 [2024-06-09 08:44:43.121761] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:20.732 [2024-06-09 08:44:43.121856] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2372160 ] 00:06:20.732 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.732 [2024-06-09 08:44:43.183046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.732 [2024-06-09 08:44:43.247943] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.732 08:44:43 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.732 08:44:43 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.732 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.993 08:44:43 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.993 08:44:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.936 08:44:44 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:21.936 08:44:44 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.936 00:06:21.936 real 0m1.285s 00:06:21.936 user 0m1.198s 00:06:21.936 sys 0m0.099s 00:06:21.936 08:44:44 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:21.936 08:44:44 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:21.936 ************************************ 00:06:21.936 END TEST accel_dif_generate 00:06:21.936 ************************************ 00:06:21.936 08:44:44 accel -- 
accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:21.936 08:44:44 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:21.936 08:44:44 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:21.936 08:44:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.936 ************************************ 00:06:21.936 START TEST accel_dif_generate_copy 00:06:21.936 ************************************ 00:06:21.936 08:44:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:06:21.936 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:21.936 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:21.936 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.936 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:21.936 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.936 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:21.936 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:21.936 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.936 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.936 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.936 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.936 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.936 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:21.936 08:44:44 accel.accel_dif_generate_copy -- 
accel/accel.sh@41 -- # jq -r . 00:06:21.936 [2024-06-09 08:44:44.479203] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:21.936 [2024-06-09 08:44:44.479264] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2372513 ] 00:06:22.197 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.197 [2024-06-09 08:44:44.540099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.197 [2024-06-09 08:44:44.602687] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.197 08:44:44 accel.accel_dif_generate_copy 
-- accel/accel.sh@19 -- # read -r var val 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=No 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.197 08:44:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.582 08:44:45 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.582 00:06:23.582 real 0m1.280s 00:06:23.582 user 0m1.190s 00:06:23.582 sys 0m0.101s 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:23.582 08:44:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:23.582 ************************************ 00:06:23.582 END TEST accel_dif_generate_copy 00:06:23.582 ************************************ 00:06:23.582 08:44:45 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:23.582 08:44:45 accel -- 
accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.582 08:44:45 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:23.583 08:44:45 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:23.583 08:44:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.583 ************************************ 00:06:23.583 START TEST accel_comp 00:06:23.583 ************************************ 00:06:23.583 08:44:45 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.583 08:44:45 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:23.583 08:44:45 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:23.583 08:44:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.583 08:44:45 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.583 08:44:45 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.583 08:44:45 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:23.583 08:44:45 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.583 08:44:45 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.583 08:44:45 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.583 08:44:45 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.583 08:44:45 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.583 08:44:45 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:23.583 08:44:45 
accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:23.583 [2024-06-09 08:44:45.835069] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:23.583 [2024-06-09 08:44:45.835148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2372864 ] 00:06:23.583 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.583 [2024-06-09 08:44:45.897991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.583 [2024-06-09 08:44:45.966363] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@20 -- # 
val= 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.583 08:44:46 accel.accel_comp -- 
accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" 
in 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.583 08:44:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.969 08:44:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.969 08:44:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.969 08:44:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.969 08:44:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.969 08:44:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.969 08:44:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.969 08:44:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.969 08:44:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.969 08:44:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.969 08:44:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.969 08:44:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.969 08:44:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.970 08:44:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.970 08:44:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.970 08:44:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.970 08:44:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:24.970 08:44:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.970 00:06:24.970 real 0m1.291s 00:06:24.970 user 0m1.200s 00:06:24.970 sys 0m0.104s 00:06:24.970 08:44:47 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:24.970 08:44:47 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:24.970 ************************************ 00:06:24.970 END TEST accel_comp 00:06:24.970 ************************************ 00:06:24.970 08:44:47 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.970 08:44:47 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:24.970 08:44:47 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:24.970 08:44:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.970 ************************************ 00:06:24.970 START TEST accel_decomp 00:06:24.970 ************************************ 00:06:24.970 08:44:47 accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r 
var val 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:24.970 [2024-06-09 08:44:47.202243] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:24.970 [2024-06-09 08:44:47.202306] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2373166 ] 00:06:24.970 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.970 [2024-06-09 08:44:47.264320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.970 [2024-06-09 08:44:47.332349] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.970 08:44:47 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.970 08:44:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.913 08:44:48 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:25.913 08:44:48 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.913 00:06:25.913 real 0m1.289s 00:06:25.913 user 0m1.199s 00:06:25.913 sys 0m0.102s 00:06:25.913 08:44:48 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:25.913 08:44:48 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:25.913 ************************************ 00:06:25.913 END TEST accel_decomp 00:06:25.913 ************************************ 00:06:26.174 08:44:48 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:26.174 08:44:48 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:26.174 08:44:48 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:26.174 08:44:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.174 ************************************ 00:06:26.174 START TEST accel_decomp_full 00:06:26.174 ************************************ 00:06:26.174 08:44:48 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:26.174 08:44:48 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:26.174 08:44:48 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 
00:06:26.174 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.174 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.174 08:44:48 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:26.174 08:44:48 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:26.174 08:44:48 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:26.174 08:44:48 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.174 08:44:48 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.174 08:44:48 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.174 08:44:48 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.174 08:44:48 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.174 08:44:48 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:26.174 08:44:48 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:26.174 [2024-06-09 08:44:48.570480] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:26.174 [2024-06-09 08:44:48.570552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2373363 ] 00:06:26.174 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.174 [2024-06-09 08:44:48.634934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.174 [2024-06-09 08:44:48.704587] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.435 08:44:48 accel.accel_decomp_full -- 
accel/accel.sh@21 -- # case "$var" in 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.435 
08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.435 08:44:48 
accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.435 08:44:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.436 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.436 08:44:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.378 08:44:49 
accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:27.378 08:44:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.378 00:06:27.378 real 0m1.311s 00:06:27.378 user 0m1.228s 00:06:27.378 sys 0m0.096s 00:06:27.378 08:44:49 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:27.378 08:44:49 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:27.378 ************************************ 00:06:27.378 END TEST accel_decomp_full 00:06:27.378 ************************************ 00:06:27.378 08:44:49 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:27.378 08:44:49 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:27.378 08:44:49 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:27.378 08:44:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.378 ************************************ 
00:06:27.378 START TEST accel_decomp_mcore 00:06:27.378 ************************************ 00:06:27.378 08:44:49 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:27.378 08:44:49 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:27.378 08:44:49 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:27.378 08:44:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.378 08:44:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.378 08:44:49 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:27.378 08:44:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:27.378 08:44:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:27.378 08:44:49 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.378 08:44:49 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.378 08:44:49 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.378 08:44:49 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.378 08:44:49 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.378 08:44:49 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:27.378 08:44:49 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:27.670 [2024-06-09 08:44:49.951194] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:27.670 [2024-06-09 08:44:49.951257] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2373601 ] 00:06:27.670 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.670 [2024-06-09 08:44:50.013965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.670 [2024-06-09 08:44:50.086155] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.670 [2024-06-09 08:44:50.086276] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.670 [2024-06-09 08:44:50.086455] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.670 [2024-06-09 08:44:50.086686] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:27.670 08:44:50 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:27.670 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.671 08:44:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.052 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.052 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.052 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.052 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.052 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.052 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.052 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.052 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.052 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.052 08:44:51 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:29.052 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.052 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.052 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.052 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.053 08:44:51 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.053 00:06:29.053 real 0m1.303s 00:06:29.053 user 0m4.433s 00:06:29.053 sys 0m0.118s 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:29.053 08:44:51 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:29.053 ************************************ 00:06:29.053 END TEST accel_decomp_mcore 00:06:29.053 ************************************ 00:06:29.053 08:44:51 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.053 08:44:51 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:29.053 08:44:51 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:29.053 08:44:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.053 ************************************ 00:06:29.053 START TEST accel_decomp_full_mcore 00:06:29.053 ************************************ 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:29.053 [2024-06-09 08:44:51.330463] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:29.053 [2024-06-09 08:44:51.330543] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2373958 ] 00:06:29.053 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.053 [2024-06-09 08:44:51.393281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.053 [2024-06-09 08:44:51.463756] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.053 [2024-06-09 08:44:51.463871] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.053 [2024-06-09 08:44:51.464027] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.053 [2024-06-09 08:44:51.464027] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val=0xf 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- 
# case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.053 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.054 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.054 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.054 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.054 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.054 08:44:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.437 08:44:52 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.437 00:06:30.437 real 0m1.315s 00:06:30.437 user 0m4.499s 00:06:30.437 sys 0m0.105s 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:30.437 08:44:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:30.437 ************************************ 00:06:30.437 END TEST accel_decomp_full_mcore 00:06:30.437 ************************************ 00:06:30.437 08:44:52 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:30.437 08:44:52 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:30.437 08:44:52 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:30.437 08:44:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.437 
************************************ 00:06:30.437 START TEST accel_decomp_mthread 00:06:30.437 ************************************ 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:30.437 [2024-06-09 08:44:52.721529] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:30.437 [2024-06-09 08:44:52.721619] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374314 ] 00:06:30.437 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.437 [2024-06-09 08:44:52.783170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.437 [2024-06-09 08:44:52.847695] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # 
val= 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.437 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- 
accel/accel.sh@22 -- # accel_module=software 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r 
var val 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.438 08:44:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.822 08:44:53 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.822 00:06:31.822 real 0m1.291s 00:06:31.822 user 0m1.200s 00:06:31.822 sys 0m0.102s 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:31.822 08:44:53 accel.accel_decomp_mthread -- 
common/autotest_common.sh@10 -- # set +x 00:06:31.822 ************************************ 00:06:31.822 END TEST accel_decomp_mthread 00:06:31.822 ************************************ 00:06:31.822 08:44:54 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:31.822 08:44:54 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:31.822 08:44:54 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:31.822 08:44:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.822 ************************************ 00:06:31.822 START TEST accel_decomp_full_mthread 00:06:31.822 ************************************ 00:06:31.822 08:44:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:31.822 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:31.822 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:31.822 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.822 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.822 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:31.822 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:31.822 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:31.822 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:06:31.822 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.822 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.822 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.822 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.822 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:31.823 [2024-06-09 08:44:54.086379] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:31.823 [2024-06-09 08:44:54.086450] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374663 ] 00:06:31.823 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.823 [2024-06-09 08:44:54.147334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.823 [2024-06-09 08:44:54.214773] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.823 08:44:54 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 
bytes' 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 
00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:06:31.823 08:44:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.209 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.209 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.209 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.209 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.209 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.209 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.209 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.209 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.209 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.209 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.209 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.209 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.209 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.209 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.209 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.209 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.209 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.210 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.210 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.210 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.210 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:06:33.210 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.210 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.210 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.210 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.210 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.210 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.210 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.210 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.210 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:33.210 08:44:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.210 00:06:33.210 real 0m1.321s 00:06:33.210 user 0m1.229s 00:06:33.210 sys 0m0.104s 00:06:33.210 08:44:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:33.210 08:44:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:33.210 ************************************ 00:06:33.210 END TEST accel_decomp_full_mthread 00:06:33.210 ************************************ 00:06:33.210 08:44:55 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:33.210 08:44:55 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:33.210 08:44:55 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:33.210 08:44:55 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:33.210 08:44:55 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:33.210 08:44:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.210 08:44:55 accel -- 
accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.210 08:44:55 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.210 08:44:55 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.210 08:44:55 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.210 08:44:55 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.210 08:44:55 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:33.210 08:44:55 accel -- accel/accel.sh@41 -- # jq -r . 00:06:33.210 ************************************ 00:06:33.210 START TEST accel_dif_functional_tests 00:06:33.210 ************************************ 00:06:33.210 08:44:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:33.210 [2024-06-09 08:44:55.503382] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:33.210 [2024-06-09 08:44:55.503445] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374905 ] 00:06:33.210 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.210 [2024-06-09 08:44:55.567379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.210 [2024-06-09 08:44:55.641447] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.210 [2024-06-09 08:44:55.641508] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.210 [2024-06-09 08:44:55.641510] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.210 00:06:33.210 00:06:33.210 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.210 http://cunit.sourceforge.net/ 00:06:33.210 00:06:33.210 00:06:33.210 Suite: accel_dif 00:06:33.210 Test: verify: DIF generated, GUARD check ...passed 00:06:33.210 Test: verify: DIF generated, APPTAG check ...passed 00:06:33.210 Test: verify: DIF 
generated, REFTAG check ...passed 00:06:33.210 Test: verify: DIF not generated, GUARD check ...[2024-06-09 08:44:55.697359] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:33.210 passed 00:06:33.210 Test: verify: DIF not generated, APPTAG check ...[2024-06-09 08:44:55.697406] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:33.210 passed 00:06:33.210 Test: verify: DIF not generated, REFTAG check ...[2024-06-09 08:44:55.697427] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:33.210 passed 00:06:33.210 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:33.210 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-09 08:44:55.697474] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:33.210 passed 00:06:33.210 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:33.210 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:33.210 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:33.210 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-09 08:44:55.697588] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:33.210 passed 00:06:33.210 Test: verify copy: DIF generated, GUARD check ...passed 00:06:33.210 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:33.210 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:33.210 Test: verify copy: DIF not generated, GUARD check ...[2024-06-09 08:44:55.697708] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:33.210 passed 00:06:33.210 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-09 08:44:55.697730] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:33.210 passed 00:06:33.210 Test: verify 
copy: DIF not generated, REFTAG check ...[2024-06-09 08:44:55.697750] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:33.210 passed 00:06:33.210 Test: generate copy: DIF generated, GUARD check ...passed 00:06:33.210 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:33.210 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:33.210 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:33.210 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:33.210 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:33.210 Test: generate copy: iovecs-len validate ...[2024-06-09 08:44:55.697935] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:33.210 passed 00:06:33.210 Test: generate copy: buffer alignment validate ...passed 00:06:33.210 00:06:33.210 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.210 suites 1 1 n/a 0 0 00:06:33.210 tests 26 26 26 0 0 00:06:33.210 asserts 115 115 115 0 n/a 00:06:33.210 00:06:33.210 Elapsed time = 0.002 seconds 00:06:33.471 00:06:33.471 real 0m0.355s 00:06:33.471 user 0m0.487s 00:06:33.471 sys 0m0.132s 00:06:33.471 08:44:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:33.471 08:44:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:33.471 ************************************ 00:06:33.471 END TEST accel_dif_functional_tests 00:06:33.471 ************************************ 00:06:33.471 00:06:33.471 real 0m30.078s 00:06:33.471 user 0m33.719s 00:06:33.471 sys 0m4.089s 00:06:33.471 08:44:55 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:33.471 08:44:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.471 ************************************ 00:06:33.471 END TEST accel 00:06:33.471 
************************************ 00:06:33.471 08:44:55 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:33.471 08:44:55 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:33.471 08:44:55 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:33.471 08:44:55 -- common/autotest_common.sh@10 -- # set +x 00:06:33.471 ************************************ 00:06:33.471 START TEST accel_rpc 00:06:33.471 ************************************ 00:06:33.471 08:44:55 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:33.471 * Looking for test storage... 00:06:33.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:33.471 08:44:56 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:33.471 08:44:56 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2375086 00:06:33.471 08:44:56 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2375086 00:06:33.471 08:44:56 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:33.471 08:44:56 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 2375086 ']' 00:06:33.471 08:44:56 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.471 08:44:56 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:33.471 08:44:56 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:33.471 08:44:56 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:33.471 08:44:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.732 [2024-06-09 08:44:56.076142] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:33.732 [2024-06-09 08:44:56.076190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375086 ] 00:06:33.732 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.732 [2024-06-09 08:44:56.136369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.732 [2024-06-09 08:44:56.201320] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.732 08:44:56 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:33.732 08:44:56 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:33.732 08:44:56 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:33.732 08:44:56 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:33.732 08:44:56 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:33.732 08:44:56 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:33.732 08:44:56 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:33.732 08:44:56 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:33.732 08:44:56 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:33.732 08:44:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.732 ************************************ 00:06:33.732 START TEST accel_assign_opcode 00:06:33.732 ************************************ 00:06:33.732 08:44:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:06:33.733 08:44:56 accel_rpc.accel_assign_opcode -- 
accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:33.733 08:44:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.733 08:44:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:33.733 [2024-06-09 08:44:56.277807] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:33.733 08:44:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.733 08:44:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:33.733 08:44:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.733 08:44:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:33.733 [2024-06-09 08:44:56.285818] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:33.733 08:44:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.733 08:44:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:33.733 08:44:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.733 08:44:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:33.993 08:44:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.993 08:44:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:33.993 08:44:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:33.993 08:44:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:33.993 08:44:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.993 08:44:56 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@10 -- # set +x 00:06:33.993 08:44:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.993 software 00:06:33.993 00:06:33.993 real 0m0.203s 00:06:33.993 user 0m0.048s 00:06:33.993 sys 0m0.007s 00:06:33.993 08:44:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:33.993 08:44:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:33.993 ************************************ 00:06:33.993 END TEST accel_assign_opcode 00:06:33.993 ************************************ 00:06:33.993 08:44:56 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2375086 00:06:33.993 08:44:56 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 2375086 ']' 00:06:33.993 08:44:56 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 2375086 00:06:33.993 08:44:56 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:06:33.993 08:44:56 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:33.993 08:44:56 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2375086 00:06:34.254 08:44:56 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:34.254 08:44:56 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:34.254 08:44:56 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2375086' 00:06:34.254 killing process with pid 2375086 00:06:34.254 08:44:56 accel_rpc -- common/autotest_common.sh@968 -- # kill 2375086 00:06:34.254 08:44:56 accel_rpc -- common/autotest_common.sh@973 -- # wait 2375086 00:06:34.254 00:06:34.254 real 0m0.846s 00:06:34.254 user 0m0.882s 00:06:34.254 sys 0m0.342s 00:06:34.254 08:44:56 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:34.254 08:44:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.254 ************************************ 00:06:34.254 END TEST accel_rpc 00:06:34.254 
************************************ 00:06:34.254 08:44:56 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:34.254 08:44:56 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:34.254 08:44:56 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:34.254 08:44:56 -- common/autotest_common.sh@10 -- # set +x 00:06:34.515 ************************************ 00:06:34.515 START TEST app_cmdline 00:06:34.515 ************************************ 00:06:34.515 08:44:56 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:34.515 * Looking for test storage... 00:06:34.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:34.515 08:44:56 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:34.515 08:44:56 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2375279 00:06:34.515 08:44:56 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2375279 00:06:34.515 08:44:56 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:34.515 08:44:56 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 2375279 ']' 00:06:34.515 08:44:56 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.515 08:44:56 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:34.515 08:44:56 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:34.515 08:44:56 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:34.515 08:44:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:34.515 [2024-06-09 08:44:57.005855] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:34.515 [2024-06-09 08:44:57.005925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375279 ] 00:06:34.515 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.515 [2024-06-09 08:44:57.071753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.776 [2024-06-09 08:44:57.146657] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.403 08:44:57 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:35.403 08:44:57 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:06:35.403 08:44:57 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:35.403 { 00:06:35.403 "version": "SPDK v24.09-pre git sha1 e55c9a812", 00:06:35.403 "fields": { 00:06:35.403 "major": 24, 00:06:35.403 "minor": 9, 00:06:35.403 "patch": 0, 00:06:35.403 "suffix": "-pre", 00:06:35.403 "commit": "e55c9a812" 00:06:35.403 } 00:06:35.403 } 00:06:35.403 08:44:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:35.403 08:44:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:35.403 08:44:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:35.403 08:44:57 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:35.403 08:44:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:35.403 08:44:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:35.403 08:44:57 app_cmdline 
-- app/cmdline.sh@26 -- # sort 00:06:35.403 08:44:57 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:35.403 08:44:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.403 08:44:57 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:35.665 08:44:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:35.665 08:44:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:35.665 08:44:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.665 08:44:57 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:06:35.665 08:44:57 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.665 08:44:57 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.665 08:44:57 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:35.665 08:44:57 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.665 08:44:57 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:35.665 08:44:57 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.665 08:44:57 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:35.665 08:44:57 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.665 08:44:57 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:35.665 08:44:57 app_cmdline -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.665 request: 00:06:35.665 { 00:06:35.665 "method": "env_dpdk_get_mem_stats", 00:06:35.665 "req_id": 1 00:06:35.665 } 00:06:35.665 Got JSON-RPC error response 00:06:35.665 response: 00:06:35.665 { 00:06:35.665 "code": -32601, 00:06:35.665 "message": "Method not found" 00:06:35.665 } 00:06:35.665 08:44:58 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:06:35.665 08:44:58 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:35.665 08:44:58 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:35.665 08:44:58 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:35.665 08:44:58 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2375279 00:06:35.665 08:44:58 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 2375279 ']' 00:06:35.665 08:44:58 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 2375279 00:06:35.665 08:44:58 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:06:35.665 08:44:58 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:35.665 08:44:58 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2375279 00:06:35.665 08:44:58 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:35.665 08:44:58 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:35.665 08:44:58 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2375279' 00:06:35.665 killing process with pid 2375279 00:06:35.665 08:44:58 app_cmdline -- common/autotest_common.sh@968 -- # kill 2375279 00:06:35.665 08:44:58 app_cmdline -- common/autotest_common.sh@973 -- # wait 2375279 00:06:35.926 00:06:35.926 real 0m1.578s 00:06:35.926 user 0m1.904s 00:06:35.926 sys 0m0.409s 00:06:35.926 08:44:58 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:35.926 
08:44:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.926 ************************************ 00:06:35.926 END TEST app_cmdline 00:06:35.926 ************************************ 00:06:35.926 08:44:58 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:35.926 08:44:58 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:35.926 08:44:58 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:35.926 08:44:58 -- common/autotest_common.sh@10 -- # set +x 00:06:36.188 ************************************ 00:06:36.189 START TEST version 00:06:36.189 ************************************ 00:06:36.189 08:44:58 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:36.189 * Looking for test storage... 00:06:36.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:36.189 08:44:58 version -- app/version.sh@17 -- # get_header_version major 00:06:36.189 08:44:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.189 08:44:58 version -- app/version.sh@14 -- # cut -f2 00:06:36.189 08:44:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.189 08:44:58 version -- app/version.sh@17 -- # major=24 00:06:36.189 08:44:58 version -- app/version.sh@18 -- # get_header_version minor 00:06:36.189 08:44:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.189 08:44:58 version -- app/version.sh@14 -- # cut -f2 00:06:36.189 08:44:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.189 08:44:58 version -- app/version.sh@18 -- # minor=9 00:06:36.189 08:44:58 version -- app/version.sh@19 -- # get_header_version patch 00:06:36.189 08:44:58 version -- 
app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.189 08:44:58 version -- app/version.sh@14 -- # cut -f2 00:06:36.189 08:44:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.189 08:44:58 version -- app/version.sh@19 -- # patch=0 00:06:36.189 08:44:58 version -- app/version.sh@20 -- # get_header_version suffix 00:06:36.189 08:44:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.189 08:44:58 version -- app/version.sh@14 -- # cut -f2 00:06:36.189 08:44:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.189 08:44:58 version -- app/version.sh@20 -- # suffix=-pre 00:06:36.189 08:44:58 version -- app/version.sh@22 -- # version=24.9 00:06:36.189 08:44:58 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:36.189 08:44:58 version -- app/version.sh@28 -- # version=24.9rc0 00:06:36.189 08:44:58 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:36.189 08:44:58 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:36.189 08:44:58 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:36.189 08:44:58 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:36.189 00:06:36.189 real 0m0.178s 00:06:36.189 user 0m0.088s 00:06:36.189 sys 0m0.131s 00:06:36.189 08:44:58 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:36.189 08:44:58 version -- common/autotest_common.sh@10 -- # set +x 00:06:36.189 ************************************ 00:06:36.189 END TEST version 00:06:36.189 
************************************ 00:06:36.189 08:44:58 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:36.189 08:44:58 -- spdk/autotest.sh@198 -- # uname -s 00:06:36.189 08:44:58 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:36.189 08:44:58 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:36.189 08:44:58 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:36.189 08:44:58 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:36.189 08:44:58 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:36.189 08:44:58 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:36.189 08:44:58 -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:36.189 08:44:58 -- common/autotest_common.sh@10 -- # set +x 00:06:36.451 08:44:58 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:36.451 08:44:58 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:36.451 08:44:58 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:36.451 08:44:58 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:36.451 08:44:58 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:36.451 08:44:58 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:36.451 08:44:58 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:36.451 08:44:58 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:36.451 08:44:58 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:36.451 08:44:58 -- common/autotest_common.sh@10 -- # set +x 00:06:36.451 ************************************ 00:06:36.451 START TEST nvmf_tcp 00:06:36.451 ************************************ 00:06:36.451 08:44:58 nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:36.451 * Looking for test storage... 
00:06:36.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:36.451 08:44:58 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.452 
08:44:58 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.452 08:44:58 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.452 08:44:58 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.452 08:44:58 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.452 08:44:58 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.452 08:44:58 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.452 08:44:58 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:36.452 08:44:58 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:36.452 08:44:58 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:36.452 08:44:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:36.452 08:44:58 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:36.452 08:44:58 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:36.452 08:44:58 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:36.452 08:44:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.452 
************************************ 00:06:36.452 START TEST nvmf_example 00:06:36.452 ************************************ 00:06:36.452 08:44:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:36.713 * Looking for test storage... 00:06:36.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.713 08:44:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.713 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:36.713 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.713 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.713 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.713 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.713 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.713 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.713 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.713 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.713 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.713 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.714 08:44:59 
nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:36.714 08:44:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.305 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:43.305 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:43.305 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:43.305 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:43.305 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:43.306 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:43.306 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:43.306 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:43.306 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:43.306 08:45:05 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:43.306 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:43.568 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:43.568 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:43.568 08:45:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:43.568 08:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:43.568 08:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:43.568 08:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:43.568 08:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:43.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:43.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:06:43.829 00:06:43.829 --- 10.0.0.2 ping statistics --- 00:06:43.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.829 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:43.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:43.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.418 ms 00:06:43.829 00:06:43.829 --- 10.0.0.1 ping statistics --- 00:06:43.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.829 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.829 08:45:06 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2379697 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2379697 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 2379697 ']' 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:43.829 08:45:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.829 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.772 08:45:07 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:44.772 08:45:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:44.772 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.839 Initializing NVMe Controllers 00:06:54.839 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:54.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:54.839 Initialization complete. Launching workers. 
00:06:54.839 ======================================================== 00:06:54.839 Latency(us) 00:06:54.839 Device Information : IOPS MiB/s Average min max 00:06:54.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14629.80 57.15 4376.55 891.59 15622.62 00:06:54.839 ======================================================== 00:06:54.839 Total : 14629.80 57.15 4376.55 891.59 15622.62 00:06:54.839 00:06:54.839 08:45:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:54.839 08:45:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:54.839 08:45:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:54.839 08:45:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:54.839 08:45:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:54.839 08:45:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:54.839 08:45:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:54.839 08:45:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:55.099 rmmod nvme_tcp 00:06:55.099 rmmod nvme_fabrics 00:06:55.099 rmmod nvme_keyring 00:06:55.099 08:45:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:55.099 08:45:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:55.099 08:45:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:55.099 08:45:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2379697 ']' 00:06:55.099 08:45:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2379697 00:06:55.099 08:45:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 2379697 ']' 00:06:55.099 08:45:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 2379697 00:06:55.099 08:45:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:06:55.099 08:45:17 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:55.099 08:45:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2379697 00:06:55.099 08:45:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:06:55.099 08:45:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:06:55.099 08:45:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2379697' 00:06:55.100 killing process with pid 2379697 00:06:55.100 08:45:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@968 -- # kill 2379697 00:06:55.100 08:45:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@973 -- # wait 2379697 00:06:55.100 nvmf threads initialize successfully 00:06:55.100 bdev subsystem init successfully 00:06:55.100 created a nvmf target service 00:06:55.100 create targets's poll groups done 00:06:55.100 all subsystems of target started 00:06:55.100 nvmf target is running 00:06:55.100 all subsystems of target stopped 00:06:55.100 destroy targets's poll groups done 00:06:55.100 destroyed the nvmf target service 00:06:55.100 bdev subsystem finish successfully 00:06:55.100 nvmf threads destroy successfully 00:06:55.100 08:45:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:55.100 08:45:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:55.100 08:45:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:55.100 08:45:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:55.100 08:45:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:55.100 08:45:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.100 08:45:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.100 08:45:17 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.653 08:45:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:57.653 08:45:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:57.653 08:45:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:57.653 08:45:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:57.653 00:06:57.653 real 0m20.758s 00:06:57.653 user 0m46.505s 00:06:57.653 sys 0m6.324s 00:06:57.653 08:45:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:57.653 08:45:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:57.653 ************************************ 00:06:57.653 END TEST nvmf_example 00:06:57.653 ************************************ 00:06:57.653 08:45:19 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:57.653 08:45:19 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:57.653 08:45:19 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:57.653 08:45:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.653 ************************************ 00:06:57.653 START TEST nvmf_filesystem 00:06:57.653 ************************************ 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:57.653 * Looking for test storage... 
00:06:57.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:57.653 08:45:19 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:57.653 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 
00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 
00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:57.654 08:45:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:57.654 #define SPDK_CONFIG_H 00:06:57.654 
#define SPDK_CONFIG_APPS 1 00:06:57.654 #define SPDK_CONFIG_ARCH native 00:06:57.654 #undef SPDK_CONFIG_ASAN 00:06:57.654 #undef SPDK_CONFIG_AVAHI 00:06:57.654 #undef SPDK_CONFIG_CET 00:06:57.654 #define SPDK_CONFIG_COVERAGE 1 00:06:57.654 #define SPDK_CONFIG_CROSS_PREFIX 00:06:57.654 #undef SPDK_CONFIG_CRYPTO 00:06:57.654 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:57.654 #undef SPDK_CONFIG_CUSTOMOCF 00:06:57.654 #undef SPDK_CONFIG_DAOS 00:06:57.654 #define SPDK_CONFIG_DAOS_DIR 00:06:57.654 #define SPDK_CONFIG_DEBUG 1 00:06:57.654 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:57.654 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:57.654 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:57.654 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:57.654 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:57.654 #undef SPDK_CONFIG_DPDK_UADK 00:06:57.654 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:57.654 #define SPDK_CONFIG_EXAMPLES 1 00:06:57.654 #undef SPDK_CONFIG_FC 00:06:57.654 #define SPDK_CONFIG_FC_PATH 00:06:57.654 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:57.654 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:57.654 #undef SPDK_CONFIG_FUSE 00:06:57.654 #undef SPDK_CONFIG_FUZZER 00:06:57.654 #define SPDK_CONFIG_FUZZER_LIB 00:06:57.654 #undef SPDK_CONFIG_GOLANG 00:06:57.654 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:57.654 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:57.654 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:57.654 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:57.654 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:57.654 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:57.654 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:57.654 #define SPDK_CONFIG_IDXD 1 00:06:57.654 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:57.654 #undef SPDK_CONFIG_IPSEC_MB 00:06:57.654 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:57.654 #define SPDK_CONFIG_ISAL 1 00:06:57.654 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:57.654 #define SPDK_CONFIG_ISCSI_INITIATOR 1 
00:06:57.654 #define SPDK_CONFIG_LIBDIR 00:06:57.654 #undef SPDK_CONFIG_LTO 00:06:57.654 #define SPDK_CONFIG_MAX_LCORES 00:06:57.654 #define SPDK_CONFIG_NVME_CUSE 1 00:06:57.654 #undef SPDK_CONFIG_OCF 00:06:57.654 #define SPDK_CONFIG_OCF_PATH 00:06:57.654 #define SPDK_CONFIG_OPENSSL_PATH 00:06:57.654 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:57.654 #define SPDK_CONFIG_PGO_DIR 00:06:57.654 #undef SPDK_CONFIG_PGO_USE 00:06:57.655 #define SPDK_CONFIG_PREFIX /usr/local 00:06:57.655 #undef SPDK_CONFIG_RAID5F 00:06:57.655 #undef SPDK_CONFIG_RBD 00:06:57.655 #define SPDK_CONFIG_RDMA 1 00:06:57.655 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:57.655 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:57.655 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:57.655 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:57.655 #define SPDK_CONFIG_SHARED 1 00:06:57.655 #undef SPDK_CONFIG_SMA 00:06:57.655 #define SPDK_CONFIG_TESTS 1 00:06:57.655 #undef SPDK_CONFIG_TSAN 00:06:57.655 #define SPDK_CONFIG_UBLK 1 00:06:57.655 #define SPDK_CONFIG_UBSAN 1 00:06:57.655 #undef SPDK_CONFIG_UNIT_TESTS 00:06:57.655 #undef SPDK_CONFIG_URING 00:06:57.655 #define SPDK_CONFIG_URING_PATH 00:06:57.655 #undef SPDK_CONFIG_URING_ZNS 00:06:57.655 #undef SPDK_CONFIG_USDT 00:06:57.655 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:57.655 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:57.655 #undef SPDK_CONFIG_VFIO_USER 00:06:57.655 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:57.655 #define SPDK_CONFIG_VHOST 1 00:06:57.655 #define SPDK_CONFIG_VIRTIO 1 00:06:57.655 #undef SPDK_CONFIG_VTUNE 00:06:57.655 #define SPDK_CONFIG_VTUNE_DIR 00:06:57.655 #define SPDK_CONFIG_WERROR 1 00:06:57.655 #define SPDK_CONFIG_WPDK_DIR 00:06:57.655 #undef SPDK_CONFIG_XNVME 00:06:57.655 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:57.655 08:45:19 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:57.655 08:45:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:06:57.655 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:57.655 08:45:20 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:57.655 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:57.655 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:57.655 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:57.655 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:57.655 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:57.655 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:57.655 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:57.655 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:57.655 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:57.655 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:57.655 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:57.655 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:57.655 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:57.655 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:57.655 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:57.656 08:45:20 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:57.656 08:45:20 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:57.656 08:45:20 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export 
DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:57.656 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:57.657 08:45:20 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:57.657 08:45:20 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2382959 ]] 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2382959 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.LBfaKQ 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 
00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.LBfaKQ/tests/target /tmp/spdk.LBfaKQ 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=956665856 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4327763968 00:06:57.657 08:45:20 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=118703632384 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370980352 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10667347968 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680779776 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864499200 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
uses["$mount"]=9699328 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=216064 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=287744 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64683995136 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1495040 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:57.657 * Looking for test storage... 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=118703632384 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12881940480 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:57.657 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:57.658 08:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:05.805 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:05.806 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:05.806 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:05.806 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:05.806 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:05.806 08:45:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:05.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:05.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:07:05.806 00:07:05.806 --- 10.0.0.2 ping statistics --- 00:07:05.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.806 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:05.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:05.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.430 ms 00:07:05.806 00:07:05.806 --- 10.0.0.1 ping statistics --- 00:07:05.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.806 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.806 ************************************ 00:07:05.806 START TEST nvmf_filesystem_no_in_capsule 00:07:05.806 ************************************ 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2386586 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2386586 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 2386586 ']' 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.806 08:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:05.806 [2024-06-09 08:45:27.296233] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:07:05.807 [2024-06-09 08:45:27.296293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.807 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.807 [2024-06-09 08:45:27.366613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.807 [2024-06-09 08:45:27.444226] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:05.807 [2024-06-09 08:45:27.444265] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:05.807 [2024-06-09 08:45:27.444272] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:05.807 [2024-06-09 08:45:27.444279] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:05.807 [2024-06-09 08:45:27.444284] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:05.807 [2024-06-09 08:45:27.444443] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.807 [2024-06-09 08:45:27.444583] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.807 [2024-06-09 08:45:27.444740] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.807 [2024-06-09 08:45:27.444741] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.807 [2024-06-09 08:45:28.122003] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.807 Malloc1 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:05.807 08:45:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.807 [2024-06-09 08:45:28.251689] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:05.807 { 00:07:05.807 "name": "Malloc1", 00:07:05.807 "aliases": [ 00:07:05.807 "89bf50ff-f072-4140-8252-43b63dee8a3a" 00:07:05.807 ], 00:07:05.807 "product_name": "Malloc disk", 
00:07:05.807 "block_size": 512, 00:07:05.807 "num_blocks": 1048576, 00:07:05.807 "uuid": "89bf50ff-f072-4140-8252-43b63dee8a3a", 00:07:05.807 "assigned_rate_limits": { 00:07:05.807 "rw_ios_per_sec": 0, 00:07:05.807 "rw_mbytes_per_sec": 0, 00:07:05.807 "r_mbytes_per_sec": 0, 00:07:05.807 "w_mbytes_per_sec": 0 00:07:05.807 }, 00:07:05.807 "claimed": true, 00:07:05.807 "claim_type": "exclusive_write", 00:07:05.807 "zoned": false, 00:07:05.807 "supported_io_types": { 00:07:05.807 "read": true, 00:07:05.807 "write": true, 00:07:05.807 "unmap": true, 00:07:05.807 "write_zeroes": true, 00:07:05.807 "flush": true, 00:07:05.807 "reset": true, 00:07:05.807 "compare": false, 00:07:05.807 "compare_and_write": false, 00:07:05.807 "abort": true, 00:07:05.807 "nvme_admin": false, 00:07:05.807 "nvme_io": false 00:07:05.807 }, 00:07:05.807 "memory_domains": [ 00:07:05.807 { 00:07:05.807 "dma_device_id": "system", 00:07:05.807 "dma_device_type": 1 00:07:05.807 }, 00:07:05.807 { 00:07:05.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.807 "dma_device_type": 2 00:07:05.807 } 00:07:05.807 ], 00:07:05.807 "driver_specific": {} 00:07:05.807 } 00:07:05.807 ]' 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:05.807 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:06.069 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:06.069 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:06.069 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:06.069 08:45:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:06.069 08:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:07.455 08:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:07.455 08:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:07.455 08:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:07.455 08:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:07.455 08:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:09.369 08:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:09.369 08:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:09.369 08:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:09.369 08:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:09.369 08:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:09.369 08:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:09.369 08:45:31 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:09.369 08:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:09.369 08:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:09.369 08:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:09.369 08:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:09.369 08:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:09.369 08:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:09.369 08:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:09.369 08:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:09.369 08:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:09.369 08:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:09.942 08:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:10.203 08:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:11.588 08:45:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:11.588 08:45:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:07:11.588 08:45:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:11.588 08:45:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:11.588 08:45:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.588 ************************************ 00:07:11.588 START TEST filesystem_ext4 00:07:11.588 ************************************ 00:07:11.588 08:45:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:11.588 08:45:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:11.588 08:45:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:11.588 08:45:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:11.588 08:45:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:11.588 08:45:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:11.588 08:45:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:11.588 08:45:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:11.588 08:45:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:11.588 08:45:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
common/autotest_common.sh@931 -- # force=-F 00:07:11.588 08:45:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:11.588 mke2fs 1.46.5 (30-Dec-2021) 00:07:11.588 Discarding device blocks: 0/522240 done 00:07:11.588 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:11.588 Filesystem UUID: 1d722ebd-590b-44cd-b2d4-7ec9ae786412 00:07:11.588 Superblock backups stored on blocks: 00:07:11.588 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:11.588 00:07:11.588 Allocating group tables: 0/64 done 00:07:11.588 Writing inode tables: 0/64 done 00:07:12.159 Creating journal (8192 blocks): done 00:07:12.992 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:07:12.992 00:07:12.992 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:07:12.992 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:13.254 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:13.515 08:45:35 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2386586 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:13.515 00:07:13.515 real 0m2.083s 00:07:13.515 user 0m0.030s 00:07:13.515 sys 0m0.070s 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:13.515 ************************************ 00:07:13.515 END TEST filesystem_ext4 00:07:13.515 ************************************ 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.515 ************************************ 00:07:13.515 START TEST filesystem_btrfs 00:07:13.515 ************************************ 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs 
-- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:13.515 08:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:13.775 btrfs-progs v6.6.2 00:07:13.775 See https://btrfs.readthedocs.io for more information. 00:07:13.775 00:07:13.775 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:13.775 NOTE: several default settings have changed in version 5.15, please make sure 00:07:13.775 this does not affect your deployments: 00:07:13.775 - DUP for metadata (-m dup) 00:07:13.775 - enabled no-holes (-O no-holes) 00:07:13.775 - enabled free-space-tree (-R free-space-tree) 00:07:13.775 00:07:13.775 Label: (null) 00:07:13.775 UUID: 6a9669e1-83d9-4fb5-85dc-646f2cf00ad1 00:07:13.775 Node size: 16384 00:07:13.775 Sector size: 4096 00:07:13.775 Filesystem size: 510.00MiB 00:07:13.775 Block group profiles: 00:07:13.775 Data: single 8.00MiB 00:07:13.775 Metadata: DUP 32.00MiB 00:07:13.775 System: DUP 8.00MiB 00:07:13.775 SSD detected: yes 00:07:13.775 Zoned device: no 00:07:13.775 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:13.775 Runtime features: free-space-tree 00:07:13.775 Checksum: crc32c 00:07:13.775 Number of devices: 1 00:07:13.775 Devices: 00:07:13.775 ID SIZE PATH 00:07:13.775 1 510.00MiB /dev/nvme0n1p1 00:07:13.775 00:07:13.775 08:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:07:13.775 08:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:14.718 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:14.718 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:14.979 08:45:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2386586 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:14.979 00:07:14.979 real 0m1.387s 00:07:14.979 user 0m0.026s 00:07:14.979 sys 0m0.130s 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:14.979 ************************************ 00:07:14.979 END TEST filesystem_btrfs 00:07:14.979 ************************************ 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.979 ************************************ 00:07:14.979 START TEST 
filesystem_xfs 00:07:14.979 ************************************ 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:07:14.979 08:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:14.979 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:14.979 = sectsz=512 attr=2, projid32bit=1 00:07:14.979 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:14.979 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:14.979 data = bsize=4096 blocks=130560, imaxpct=25 
00:07:14.979 = sunit=0 swidth=0 blks 00:07:14.979 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:14.979 log =internal log bsize=4096 blocks=16384, version=2 00:07:14.979 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:14.979 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:15.924 Discarding blocks...Done. 00:07:15.924 08:45:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:07:15.924 08:45:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:18.508 08:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:18.508 08:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:18.508 08:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:18.508 08:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:18.508 08:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:18.508 08:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:18.508 08:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2386586 00:07:18.508 08:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:18.508 08:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:18.508 08:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:07:18.508 08:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:18.508 00:07:18.508 real 0m3.487s 00:07:18.508 user 0m0.018s 00:07:18.508 sys 0m0.084s 00:07:18.508 08:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:18.508 08:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:18.508 ************************************ 00:07:18.508 END TEST filesystem_xfs 00:07:18.508 ************************************ 00:07:18.508 08:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:18.768 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:18.768 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:18.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:18.768 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:18.768 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:07:18.768 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:18.768 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:18.768 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:18.768 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:07:18.768 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:07:18.768 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:18.768 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:18.768 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.769 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:18.769 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:18.769 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2386586 00:07:18.769 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 2386586 ']' 00:07:18.769 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 2386586 00:07:18.769 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:07:18.769 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:18.769 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2386586 00:07:18.769 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:18.769 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:18.769 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 2386586' 00:07:18.769 killing process with pid 2386586 00:07:18.769 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 2386586 00:07:18.769 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 2386586 00:07:19.029 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:19.029 00:07:19.029 real 0m14.285s 00:07:19.029 user 0m56.354s 00:07:19.029 sys 0m1.204s 00:07:19.029 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:19.029 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.029 ************************************ 00:07:19.029 END TEST nvmf_filesystem_no_in_capsule 00:07:19.029 ************************************ 00:07:19.029 08:45:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:19.029 08:45:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:19.029 08:45:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:19.029 08:45:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.289 ************************************ 00:07:19.289 START TEST nvmf_filesystem_in_capsule 00:07:19.289 ************************************ 00:07:19.289 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:07:19.289 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:19.289 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:19.289 08:45:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:19.289 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:19.289 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.289 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2389682 00:07:19.289 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2389682 00:07:19.289 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:19.289 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 2389682 ']' 00:07:19.289 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.289 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:19.289 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.290 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:19.290 08:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.290 [2024-06-09 08:45:41.656428] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:07:19.290 [2024-06-09 08:45:41.656475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.290 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.290 [2024-06-09 08:45:41.721673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.290 [2024-06-09 08:45:41.787645] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.290 [2024-06-09 08:45:41.787681] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.290 [2024-06-09 08:45:41.787688] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.290 [2024-06-09 08:45:41.787695] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.290 [2024-06-09 08:45:41.787701] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:19.290 [2024-06-09 08:45:41.788500] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.290 [2024-06-09 08:45:41.788757] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.290 [2024-06-09 08:45:41.788914] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.290 [2024-06-09 08:45:41.788914] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.229 [2024-06-09 08:45:42.476061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.229 Malloc1 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.229 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.230 08:45:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.230 [2024-06-09 08:45:42.600745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:20.230 { 00:07:20.230 "name": "Malloc1", 00:07:20.230 "aliases": [ 00:07:20.230 "69dcb3f8-a196-415e-b971-22029622904c" 00:07:20.230 ], 00:07:20.230 "product_name": "Malloc disk", 00:07:20.230 "block_size": 512, 00:07:20.230 "num_blocks": 1048576, 00:07:20.230 "uuid": "69dcb3f8-a196-415e-b971-22029622904c", 00:07:20.230 "assigned_rate_limits": { 
00:07:20.230 "rw_ios_per_sec": 0, 00:07:20.230 "rw_mbytes_per_sec": 0, 00:07:20.230 "r_mbytes_per_sec": 0, 00:07:20.230 "w_mbytes_per_sec": 0 00:07:20.230 }, 00:07:20.230 "claimed": true, 00:07:20.230 "claim_type": "exclusive_write", 00:07:20.230 "zoned": false, 00:07:20.230 "supported_io_types": { 00:07:20.230 "read": true, 00:07:20.230 "write": true, 00:07:20.230 "unmap": true, 00:07:20.230 "write_zeroes": true, 00:07:20.230 "flush": true, 00:07:20.230 "reset": true, 00:07:20.230 "compare": false, 00:07:20.230 "compare_and_write": false, 00:07:20.230 "abort": true, 00:07:20.230 "nvme_admin": false, 00:07:20.230 "nvme_io": false 00:07:20.230 }, 00:07:20.230 "memory_domains": [ 00:07:20.230 { 00:07:20.230 "dma_device_id": "system", 00:07:20.230 "dma_device_type": 1 00:07:20.230 }, 00:07:20.230 { 00:07:20.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.230 "dma_device_type": 2 00:07:20.230 } 00:07:20.230 ], 00:07:20.230 "driver_specific": {} 00:07:20.230 } 00:07:20.230 ]' 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:20.230 08:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme 
connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:22.141 08:45:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:22.141 08:45:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:22.141 08:45:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:22.141 08:45:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:22.141 08:45:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:24.061 08:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:24.061 08:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:24.061 08:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:24.061 08:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:24.061 08:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:24.061 08:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:24.061 08:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:24.061 08:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:24.061 08:45:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:24.061 08:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:24.061 08:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:24.061 08:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:24.061 08:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:24.061 08:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:24.061 08:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:24.061 08:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:24.061 08:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:24.061 08:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:24.322 08:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:25.266 08:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:25.266 08:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:25.266 08:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:25.266 08:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:25.266 08:45:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.266 ************************************ 00:07:25.266 START TEST filesystem_in_capsule_ext4 00:07:25.266 ************************************ 00:07:25.266 08:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:25.266 08:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:25.266 08:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:25.266 08:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:25.266 08:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:25.266 08:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:25.266 08:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:25.266 08:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:25.266 08:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:25.266 08:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:07:25.266 08:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 
00:07:25.266 mke2fs 1.46.5 (30-Dec-2021) 00:07:25.266 Discarding device blocks: 0/522240 done 00:07:25.266 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:25.266 Filesystem UUID: c662b180-0678-4bea-bd46-056a6e8528a8 00:07:25.266 Superblock backups stored on blocks: 00:07:25.266 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:25.266 00:07:25.266 Allocating group tables: 0/64 done 00:07:25.266 Writing inode tables: 0/64 done 00:07:25.527 Creating journal (8192 blocks): done 00:07:26.469 Writing superblocks and filesystem accounting information: 0/64 done 00:07:26.469 00:07:26.469 08:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:07:26.469 08:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:26.730 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:26.730 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:26.730 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:26.730 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:26.730 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:26.730 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:26.730 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2389682 00:07:26.730 08:45:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:26.730 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:26.730 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:26.730 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:26.730 00:07:26.730 real 0m1.526s 00:07:26.730 user 0m0.031s 00:07:26.730 sys 0m0.065s 00:07:26.730 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:26.730 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:26.730 ************************************ 00:07:26.730 END TEST filesystem_in_capsule_ext4 00:07:26.730 ************************************ 00:07:26.991 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:26.991 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:26.991 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:26.991 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.991 ************************************ 00:07:26.991 START TEST filesystem_in_capsule_btrfs 00:07:26.991 ************************************ 00:07:26.991 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create 
btrfs nvme0n1 00:07:26.991 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:26.991 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:26.991 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:26.991 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:26.991 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:26.991 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:26.991 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:26.991 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:26.991 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:26.991 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:27.252 btrfs-progs v6.6.2 00:07:27.252 See https://btrfs.readthedocs.io for more information. 00:07:27.252 00:07:27.252 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:27.252 NOTE: several default settings have changed in version 5.15, please make sure 00:07:27.252 this does not affect your deployments: 00:07:27.252 - DUP for metadata (-m dup) 00:07:27.252 - enabled no-holes (-O no-holes) 00:07:27.252 - enabled free-space-tree (-R free-space-tree) 00:07:27.252 00:07:27.252 Label: (null) 00:07:27.252 UUID: eb3b5d17-99e8-454f-8a74-46c95ac16b1c 00:07:27.252 Node size: 16384 00:07:27.252 Sector size: 4096 00:07:27.252 Filesystem size: 510.00MiB 00:07:27.252 Block group profiles: 00:07:27.252 Data: single 8.00MiB 00:07:27.252 Metadata: DUP 32.00MiB 00:07:27.252 System: DUP 8.00MiB 00:07:27.252 SSD detected: yes 00:07:27.252 Zoned device: no 00:07:27.252 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:27.252 Runtime features: free-space-tree 00:07:27.252 Checksum: crc32c 00:07:27.252 Number of devices: 1 00:07:27.252 Devices: 00:07:27.252 ID SIZE PATH 00:07:27.252 1 510.00MiB /dev/nvme0n1p1 00:07:27.252 00:07:27.252 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:07:27.252 08:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.196 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.196 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:28.196 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.196 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:28.196 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@29 -- # i=0 00:07:28.196 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2389682 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.457 00:07:28.457 real 0m1.432s 00:07:28.457 user 0m0.025s 00:07:28.457 sys 0m0.137s 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:28.457 ************************************ 00:07:28.457 END TEST filesystem_in_capsule_btrfs 00:07:28.457 ************************************ 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:07:28.457 ************************************ 00:07:28.457 START TEST filesystem_in_capsule_xfs 00:07:28.457 ************************************ 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:07:28.457 08:45:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:28.457 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 
00:07:28.457 = sectsz=512 attr=2, projid32bit=1 00:07:28.457 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:28.457 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:28.457 data = bsize=4096 blocks=130560, imaxpct=25 00:07:28.458 = sunit=0 swidth=0 blks 00:07:28.458 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:28.458 log =internal log bsize=4096 blocks=16384, version=2 00:07:28.458 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:28.458 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:29.400 Discarding blocks...Done. 00:07:29.400 08:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:07:29.400 08:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:31.316 08:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:31.316 08:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:31.316 08:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:31.316 08:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:31.316 08:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:31.316 08:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:31.316 08:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2389682 00:07:31.316 08:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l 
-o NAME 00:07:31.316 08:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:31.316 08:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:31.316 08:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:31.316 00:07:31.316 real 0m2.773s 00:07:31.316 user 0m0.025s 00:07:31.316 sys 0m0.076s 00:07:31.316 08:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:31.316 08:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:31.316 ************************************ 00:07:31.316 END TEST filesystem_in_capsule_xfs 00:07:31.316 ************************************ 00:07:31.316 08:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:31.316 08:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:31.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2389682 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 2389682 ']' 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 2389682 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2389682 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2389682' 00:07:31.888 killing process with pid 2389682 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 2389682 00:07:31.888 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 2389682 00:07:32.149 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:32.149 00:07:32.149 real 0m12.996s 00:07:32.149 user 0m51.232s 00:07:32.149 sys 0m1.229s 00:07:32.149 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:32.149 08:45:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.149 ************************************ 00:07:32.149 END TEST nvmf_filesystem_in_capsule 00:07:32.149 ************************************ 00:07:32.149 08:45:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:32.149 08:45:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:32.149 08:45:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:32.149 08:45:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:32.149 08:45:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:32.149 08:45:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:32.149 08:45:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:32.149 rmmod nvme_tcp 00:07:32.149 rmmod nvme_fabrics 00:07:32.149 rmmod nvme_keyring 00:07:32.149 08:45:54 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:32.149 08:45:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:32.149 08:45:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:32.410 08:45:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:32.410 08:45:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:32.410 08:45:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:32.410 08:45:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:32.410 08:45:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:32.410 08:45:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:32.410 08:45:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.410 08:45:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.410 08:45:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.320 08:45:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:34.320 00:07:34.320 real 0m36.955s 00:07:34.320 user 1m49.756s 00:07:34.320 sys 0m7.862s 00:07:34.320 08:45:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:34.321 08:45:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.321 ************************************ 00:07:34.321 END TEST nvmf_filesystem 00:07:34.321 ************************************ 00:07:34.321 08:45:56 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:34.321 08:45:56 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:34.321 08:45:56 nvmf_tcp -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:07:34.321 08:45:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:34.321 ************************************ 00:07:34.321 START TEST nvmf_target_discovery 00:07:34.321 ************************************ 00:07:34.321 08:45:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:34.635 * Looking for test storage... 00:07:34.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.635 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.636 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.636 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:34.636 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:34.636 08:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:34.636 08:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:34.636 08:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:34.636 08:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:34.636 08:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:34.636 08:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:34.636 08:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:34.636 08:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.636 08:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:34.636 08:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:34.636 08:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:34.636 08:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.636 08:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:07:34.636 08:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.636 08:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:34.636 08:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:34.636 08:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:34.636 08:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.227 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.227 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:41.228 08:46:03 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:41.228 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:41.228 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:41.228 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:41.228 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.228 08:46:03 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.228 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.489 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:07:41.489 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.489 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:41.490 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.490 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.490 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.490 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:41.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:07:41.490 00:07:41.490 --- 10.0.0.2 ping statistics --- 00:07:41.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.490 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:07:41.490 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:41.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:07:41.490 00:07:41.490 --- 10.0.0.1 ping statistics --- 00:07:41.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.490 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:07:41.490 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.490 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:41.490 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:41.490 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.490 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:41.490 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:41.490 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.490 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:41.490 08:46:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:41.490 08:46:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:41.490 08:46:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:41.490 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:41.490 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.490 08:46:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2396599 00:07:41.490 08:46:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2396599 00:07:41.490 08:46:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt 
-i 0 -e 0xFFFF -m 0xF 00:07:41.490 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 2396599 ']' 00:07:41.490 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.490 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:41.490 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.490 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:41.490 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.750 [2024-06-09 08:46:04.080363] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:07:41.750 [2024-06-09 08:46:04.080433] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.750 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.750 [2024-06-09 08:46:04.151200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.750 [2024-06-09 08:46:04.227252] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.750 [2024-06-09 08:46:04.227290] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.750 [2024-06-09 08:46:04.227301] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.750 [2024-06-09 08:46:04.227308] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:41.750 [2024-06-09 08:46:04.227313] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.750 [2024-06-09 08:46:04.227451] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.750 [2024-06-09 08:46:04.227507] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.750 [2024-06-09 08:46:04.227673] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.750 [2024-06-09 08:46:04.227674] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.321 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:42.321 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:07:42.321 08:46:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:42.321 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:42.321 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.581 [2024-06-09 08:46:04.917062] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:42.581 08:46:04 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.581 Null1 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.581 [2024-06-09 08:46:04.977371] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.581 08:46:04 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:42.581 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.582 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.582 Null2 00:07:42.582 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.582 08:46:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:42.582 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.582 08:46:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.582 Null3 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i 
in $(seq 1 4) 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.582 Null4 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener 
discovery -t tcp -a 10.0.0.2 -s 4420 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.582 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.843 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.843 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:07:42.843 00:07:42.843 Discovery Log Number of Records 6, Generation counter 6 00:07:42.843 =====Discovery Log Entry 0====== 00:07:42.843 trtype: tcp 00:07:42.843 adrfam: ipv4 00:07:42.843 subtype: current discovery subsystem 00:07:42.843 treq: not required 00:07:42.843 portid: 0 00:07:42.843 trsvcid: 4420 00:07:42.843 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:42.843 traddr: 10.0.0.2 00:07:42.843 eflags: explicit discovery connections, duplicate discovery information 00:07:42.843 sectype: none 00:07:42.843 =====Discovery Log Entry 1====== 00:07:42.843 trtype: tcp 00:07:42.843 adrfam: ipv4 00:07:42.843 subtype: nvme subsystem 00:07:42.843 treq: not required 00:07:42.843 portid: 0 00:07:42.843 trsvcid: 4420 00:07:42.843 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:42.843 traddr: 10.0.0.2 00:07:42.843 eflags: none 00:07:42.843 sectype: none 00:07:42.843 =====Discovery Log Entry 2====== 00:07:42.843 trtype: tcp 00:07:42.843 adrfam: 
ipv4 00:07:42.843 subtype: nvme subsystem 00:07:42.843 treq: not required 00:07:42.843 portid: 0 00:07:42.843 trsvcid: 4420 00:07:42.843 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:42.843 traddr: 10.0.0.2 00:07:42.843 eflags: none 00:07:42.843 sectype: none 00:07:42.843 =====Discovery Log Entry 3====== 00:07:42.843 trtype: tcp 00:07:42.843 adrfam: ipv4 00:07:42.843 subtype: nvme subsystem 00:07:42.843 treq: not required 00:07:42.843 portid: 0 00:07:42.843 trsvcid: 4420 00:07:42.843 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:42.843 traddr: 10.0.0.2 00:07:42.843 eflags: none 00:07:42.843 sectype: none 00:07:42.843 =====Discovery Log Entry 4====== 00:07:42.843 trtype: tcp 00:07:42.843 adrfam: ipv4 00:07:42.843 subtype: nvme subsystem 00:07:42.843 treq: not required 00:07:42.843 portid: 0 00:07:42.843 trsvcid: 4420 00:07:42.843 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:42.843 traddr: 10.0.0.2 00:07:42.843 eflags: none 00:07:42.843 sectype: none 00:07:42.843 =====Discovery Log Entry 5====== 00:07:42.843 trtype: tcp 00:07:42.843 adrfam: ipv4 00:07:42.843 subtype: discovery subsystem referral 00:07:42.843 treq: not required 00:07:42.843 portid: 0 00:07:42.843 trsvcid: 4430 00:07:42.843 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:42.843 traddr: 10.0.0.2 00:07:42.843 eflags: none 00:07:42.843 sectype: none 00:07:42.843 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:42.843 Perform nvmf subsystem discovery via RPC 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.844 [ 00:07:42.844 { 00:07:42.844 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:42.844 "subtype": "Discovery", 00:07:42.844 "listen_addresses": [ 00:07:42.844 { 
00:07:42.844 "trtype": "TCP", 00:07:42.844 "adrfam": "IPv4", 00:07:42.844 "traddr": "10.0.0.2", 00:07:42.844 "trsvcid": "4420" 00:07:42.844 } 00:07:42.844 ], 00:07:42.844 "allow_any_host": true, 00:07:42.844 "hosts": [] 00:07:42.844 }, 00:07:42.844 { 00:07:42.844 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:42.844 "subtype": "NVMe", 00:07:42.844 "listen_addresses": [ 00:07:42.844 { 00:07:42.844 "trtype": "TCP", 00:07:42.844 "adrfam": "IPv4", 00:07:42.844 "traddr": "10.0.0.2", 00:07:42.844 "trsvcid": "4420" 00:07:42.844 } 00:07:42.844 ], 00:07:42.844 "allow_any_host": true, 00:07:42.844 "hosts": [], 00:07:42.844 "serial_number": "SPDK00000000000001", 00:07:42.844 "model_number": "SPDK bdev Controller", 00:07:42.844 "max_namespaces": 32, 00:07:42.844 "min_cntlid": 1, 00:07:42.844 "max_cntlid": 65519, 00:07:42.844 "namespaces": [ 00:07:42.844 { 00:07:42.844 "nsid": 1, 00:07:42.844 "bdev_name": "Null1", 00:07:42.844 "name": "Null1", 00:07:42.844 "nguid": "3753B299C264421F8F3A97EA5EAEB6B7", 00:07:42.844 "uuid": "3753b299-c264-421f-8f3a-97ea5eaeb6b7" 00:07:42.844 } 00:07:42.844 ] 00:07:42.844 }, 00:07:42.844 { 00:07:42.844 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:42.844 "subtype": "NVMe", 00:07:42.844 "listen_addresses": [ 00:07:42.844 { 00:07:42.844 "trtype": "TCP", 00:07:42.844 "adrfam": "IPv4", 00:07:42.844 "traddr": "10.0.0.2", 00:07:42.844 "trsvcid": "4420" 00:07:42.844 } 00:07:42.844 ], 00:07:42.844 "allow_any_host": true, 00:07:42.844 "hosts": [], 00:07:42.844 "serial_number": "SPDK00000000000002", 00:07:42.844 "model_number": "SPDK bdev Controller", 00:07:42.844 "max_namespaces": 32, 00:07:42.844 "min_cntlid": 1, 00:07:42.844 "max_cntlid": 65519, 00:07:42.844 "namespaces": [ 00:07:42.844 { 00:07:42.844 "nsid": 1, 00:07:42.844 "bdev_name": "Null2", 00:07:42.844 "name": "Null2", 00:07:42.844 "nguid": "E37CFC84274846DEBAF35528D10A63D5", 00:07:42.844 "uuid": "e37cfc84-2748-46de-baf3-5528d10a63d5" 00:07:42.844 } 00:07:42.844 ] 00:07:42.844 }, 00:07:42.844 { 
00:07:42.844 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:42.844 "subtype": "NVMe", 00:07:42.844 "listen_addresses": [ 00:07:42.844 { 00:07:42.844 "trtype": "TCP", 00:07:42.844 "adrfam": "IPv4", 00:07:42.844 "traddr": "10.0.0.2", 00:07:42.844 "trsvcid": "4420" 00:07:42.844 } 00:07:42.844 ], 00:07:42.844 "allow_any_host": true, 00:07:42.844 "hosts": [], 00:07:42.844 "serial_number": "SPDK00000000000003", 00:07:42.844 "model_number": "SPDK bdev Controller", 00:07:42.844 "max_namespaces": 32, 00:07:42.844 "min_cntlid": 1, 00:07:42.844 "max_cntlid": 65519, 00:07:42.844 "namespaces": [ 00:07:42.844 { 00:07:42.844 "nsid": 1, 00:07:42.844 "bdev_name": "Null3", 00:07:42.844 "name": "Null3", 00:07:42.844 "nguid": "433B5E6B3C6746CF9B39F6C771441875", 00:07:42.844 "uuid": "433b5e6b-3c67-46cf-9b39-f6c771441875" 00:07:42.844 } 00:07:42.844 ] 00:07:42.844 }, 00:07:42.844 { 00:07:42.844 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:42.844 "subtype": "NVMe", 00:07:42.844 "listen_addresses": [ 00:07:42.844 { 00:07:42.844 "trtype": "TCP", 00:07:42.844 "adrfam": "IPv4", 00:07:42.844 "traddr": "10.0.0.2", 00:07:42.844 "trsvcid": "4420" 00:07:42.844 } 00:07:42.844 ], 00:07:42.844 "allow_any_host": true, 00:07:42.844 "hosts": [], 00:07:42.844 "serial_number": "SPDK00000000000004", 00:07:42.844 "model_number": "SPDK bdev Controller", 00:07:42.844 "max_namespaces": 32, 00:07:42.844 "min_cntlid": 1, 00:07:42.844 "max_cntlid": 65519, 00:07:42.844 "namespaces": [ 00:07:42.844 { 00:07:42.844 "nsid": 1, 00:07:42.844 "bdev_name": "Null4", 00:07:42.844 "name": "Null4", 00:07:42.844 "nguid": "B2E68BD348404594BC0B5E4D5FB37B42", 00:07:42.844 "uuid": "b2e68bd3-4840-4594-bc0b-5e4d5fb37b42" 00:07:42.844 } 00:07:42.844 ] 00:07:42.844 } 00:07:42.844 ] 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 
00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 
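Stripped of the xtrace timestamps, the create/teardown pattern traced above (discovery.sh @26-@30 for setup, @42-@44 for teardown) reduces to two symmetric loops. A minimal offline sketch that prints the equivalent rpc.py invocations rather than driving a live target (rpc.py, the NQN prefix, and the 102400-block/512-byte null bdev geometry are taken from the log; nothing here talks to SPDK):

```shell
#!/bin/sh
# Print the setup commands the test issues for each of the four subsystems.
for i in 1 2 3 4; do
  echo "rpc.py bdev_null_create Null$i 102400 512"
  echo "rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i"
  echo "rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i"
  echo "rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
done
# Teardown mirrors setup: delete each subsystem, then its backing bdev.
for i in 1 2 3 4; do
  echo "rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i"
  echo "rpc.py bdev_null_delete Null$i"
done
```

The six-entry discovery log above is the direct result of this setup: four NVMe subsystem entries, the current discovery subsystem, and the one referral added at @35.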
00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.844 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:43.105 rmmod nvme_tcp 00:07:43.105 rmmod nvme_fabrics 00:07:43.105 rmmod nvme_keyring 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2396599 ']' 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2396599 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 2396599 ']' 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 2396599 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2396599 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2396599' 00:07:43.105 killing process with pid 2396599 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 2396599 00:07:43.105 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@973 -- # wait 2396599 00:07:43.365 08:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:43.365 08:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:43.365 08:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:43.365 08:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:43.365 08:46:05 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:43.365 08:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.365 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.365 08:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.281 08:46:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:45.281 00:07:45.281 real 0m10.876s 00:07:45.281 user 0m8.056s 00:07:45.281 sys 0m5.455s 00:07:45.281 08:46:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:45.281 08:46:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.281 ************************************ 00:07:45.281 END TEST nvmf_target_discovery 00:07:45.281 ************************************ 00:07:45.281 08:46:07 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:45.281 08:46:07 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:45.281 08:46:07 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:45.281 08:46:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:45.281 ************************************ 00:07:45.281 START TEST nvmf_referrals 00:07:45.281 ************************************ 00:07:45.281 08:46:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:45.542 * Looking for test storage... 
00:07:45.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.542 
08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:45.542 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.543 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:45.543 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:45.543 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.543 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.543 08:46:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.543 08:46:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.543 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:45.543 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:45.543 08:46:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:45.543 08:46:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
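referrals.sh pins three referral addresses (127.0.0.2 through 127.0.0.4) and the referral port 4430 in the constants sourced above. A hedged sketch of how those constants would expand into discovery-referral registrations (commands are printed only; the actual registration happens later in the script, and the rpc.py invocation shape is an assumption based on the nvmf_discovery_add_referral call seen in the discovery test above):

```shell
#!/bin/sh
# Expand the NVMF_REFERRAL_IP_* / NVMF_PORT_REFERRAL constants from
# referrals.sh into add-referral commands; printed, not executed.
NVMF_PORT_REFERRAL=4430
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  echo "rpc.py nvmf_discovery_add_referral -t tcp -a $ip -s $NVMF_PORT_REFERRAL"
done
```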
00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:52.135 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:52.135 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.135 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:52.136 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.136 08:46:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:52.136 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.136 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:52.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:52.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:07:52.398 00:07:52.398 --- 10.0.0.2 ping statistics --- 00:07:52.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.398 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:52.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.431 ms 00:07:52.398 00:07:52.398 --- 10.0.0.1 ping statistics --- 00:07:52.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.398 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.398 08:46:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2401089 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2401089 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 2401089 ']' 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:52.398 08:46:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.659 [2024-06-09 08:46:14.996745] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:07:52.659 [2024-06-09 08:46:14.996808] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.659 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.659 [2024-06-09 08:46:15.067463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.659 [2024-06-09 08:46:15.143362] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.659 [2024-06-09 08:46:15.143398] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:52.659 [2024-06-09 08:46:15.143410] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.659 [2024-06-09 08:46:15.143418] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.659 [2024-06-09 08:46:15.143423] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.659 [2024-06-09 08:46:15.143501] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.659 [2024-06-09 08:46:15.143744] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.659 [2024-06-09 08:46:15.143904] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.659 [2024-06-09 08:46:15.143904] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.230 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:53.230 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:07:53.230 08:46:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:53.230 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:53.231 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.492 [2024-06-09 08:46:15.825995] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.492 08:46:15 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.492 [2024-06-09 08:46:15.842167] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:53.492 08:46:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:53.753 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:53.753 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:53.753 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:53.753 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.753 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.754 08:46:16 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:53.754 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:07:54.015 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:54.276 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:54.276 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:54.276 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:54.276 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:54.276 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:54.276 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.276 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
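The checks traced above repeatedly run `nvme discover ... -o json` and filter the result with jq: referral addresses are every `.records[]` entry whose `.subtype` is not `"current discovery subsystem"`, and subsystem NQNs are read from entries matching a given subtype. A minimal Python sketch of that same selection logic follows; the field names mirror the jq paths in the log, but the sample record values are illustrative stand-ins, not captured output:

```python
import json

# Hypothetical sample shaped like `nvme discover -o json` output as the
# log's jq filters read it (.records[].subtype / .traddr / .subnqn).
# The concrete values below are illustrative, not taken from this run.
sample = json.dumps({
    "records": [
        {"subtype": "current discovery subsystem",
         "traddr": "10.0.0.2",
         "subnqn": "nqn.2014-08.org.nvmexpress.discovery"},
        {"subtype": "nvme subsystem",
         "traddr": "127.0.0.2",
         "subnqn": "nqn.2016-06.io.spdk:cnode1"},
        {"subtype": "discovery subsystem referral",
         "traddr": "127.0.0.2",
         "subnqn": "nqn.2014-08.org.nvmexpress.discovery"},
    ]
})

def referral_ips(discover_json: str) -> list:
    # Equivalent of:
    #   jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    recs = json.loads(discover_json)["records"]
    return sorted(r["traddr"] for r in recs
                  if r["subtype"] != "current discovery subsystem")

def subnqns(discover_json: str, subtype: str) -> list:
    # Equivalent of:
    #   jq '.records[] | select(.subtype == "<subtype>")' piped to jq -r .subnqn
    recs = json.loads(discover_json)["records"]
    return [r["subnqn"] for r in recs if r["subtype"] == subtype]

print(referral_ips(sample))               # ['127.0.0.2', '127.0.0.2']
print(subnqns(sample, "nvme subsystem"))  # ['nqn.2016-06.io.spdk:cnode1']
```

The test script then compares this sorted list against the list returned by the `nvmf_discovery_get_referrals` RPC, which is why both sides of the `[[ ... == ... ]]` comparisons in the trace are sorted before matching.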
00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.537 08:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.537 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:54.537 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:54.537 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:54.537 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:54.537 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:54.538 08:46:17 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.538 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:54.538 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:54.799 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:54.799 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:54.799 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:54.799 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:54.799 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:54.799 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.799 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:54.799 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:54.799 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:54.799 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:54.799 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:54.799 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.799 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:55.060 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:55.060 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:55.060 08:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:55.060 08:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:55.060 08:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:55.060 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:55.060 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:55.060 08:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:55.060 08:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:55.060 08:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:55.060 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:55.060 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:55.060 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:55.060 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:55.060 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.060 08:46:17 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:55.060 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:55.060 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:55.060 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:55.061 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:55.061 08:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:55.061 08:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:55.061 08:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:55.061 08:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:55.061 08:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:55.061 08:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:55.061 08:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:55.061 rmmod nvme_tcp 00:07:55.061 rmmod nvme_fabrics 00:07:55.061 rmmod nvme_keyring 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2401089 ']' 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2401089 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 2401089 ']' 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 2401089 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2401089 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2401089' 00:07:55.322 killing process with pid 2401089 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 2401089 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 2401089 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.322 08:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.886 08:46:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:57.886 00:07:57.886 real 0m12.061s 00:07:57.886 user 0m13.554s 00:07:57.886 sys 0m5.876s 00:07:57.886 08:46:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:57.886 08:46:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.886 ************************************ 
00:07:57.886 END TEST nvmf_referrals 00:07:57.886 ************************************ 00:07:57.886 08:46:19 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:57.886 08:46:19 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:57.886 08:46:19 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:57.886 08:46:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:57.886 ************************************ 00:07:57.886 START TEST nvmf_connect_disconnect 00:07:57.886 ************************************ 00:07:57.886 08:46:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:57.886 * Looking for test storage... 00:07:57.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.886 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:57.887 08:46:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.539 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:04.540 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:04.540 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.540 08:46:26 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:04.540 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:04.540 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:04.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:04.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:08:04.540 00:08:04.540 --- 10.0.0.2 ping statistics --- 00:08:04.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.540 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:04.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.449 ms 00:08:04.540 00:08:04.540 --- 10.0.0.1 ping statistics --- 00:08:04.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.540 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # 
xtrace_disable 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2405855 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2405855 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 2405855 ']' 00:08:04.540 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.541 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:04.541 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.541 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:04.541 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:04.541 08:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:04.541 [2024-06-09 08:46:26.945896] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:08:04.541 [2024-06-09 08:46:26.945961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.541 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.541 [2024-06-09 08:46:27.015613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.541 [2024-06-09 08:46:27.089868] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.541 [2024-06-09 08:46:27.089904] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.541 [2024-06-09 08:46:27.089912] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.541 [2024-06-09 08:46:27.089918] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.541 [2024-06-09 08:46:27.089924] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:04.541 [2024-06-09 08:46:27.090063] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.541 [2024-06-09 08:46:27.090183] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.541 [2024-06-09 08:46:27.090341] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.541 [2024-06-09 08:46:27.090342] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 [2024-06-09 08:46:27.767931] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.480 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:05.481 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.481 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:05.481 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.481 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.481 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.481 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:05.481 [2024-06-09 08:46:27.827305] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.481 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.481 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:05.481 08:46:27 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:05.481 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:05.481 08:46:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:08.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s)
[NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) — repeated every ~2.5 s from 00:09:01.671 through 00:11:58.422]
00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini
00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:58.422 rmmod nvme_tcp 00:11:58.422 rmmod nvme_fabrics 00:11:58.422 rmmod nvme_keyring 00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2405855 ']' 00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2405855 00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@949 -- # '[' -z 2405855 ']' 00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 2405855 00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2405855 00:11:58.422 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:58.423 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:58.423 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2405855' 
00:11:58.423 killing process with pid 2405855 00:11:58.423 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 2405855 00:11:58.423 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 2405855 00:11:58.423 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:58.423 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:58.423 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:58.423 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:58.423 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:58.423 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.423 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:58.423 08:50:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.334 08:50:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:00.334 00:12:00.334 real 4m2.782s 00:12:00.334 user 15m28.461s 00:12:00.334 sys 0m21.212s 00:12:00.334 08:50:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:00.334 08:50:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.334 ************************************ 00:12:00.334 END TEST nvmf_connect_disconnect 00:12:00.334 ************************************ 00:12:00.334 08:50:22 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:00.334 08:50:22 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:00.334 08:50:22 nvmf_tcp -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:12:00.334 08:50:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:00.334 ************************************ 00:12:00.334 START TEST nvmf_multitarget 00:12:00.334 ************************************ 00:12:00.334 08:50:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:00.595 * Looking for test storage... 00:12:00.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:00.595 08:50:22 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:00.595 08:50:22 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:00.595 08:50:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.253 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:07.254 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:07.254 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.254 08:50:29 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:07.254 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:07.254 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.254 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.516 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.516 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.516 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:07.516 08:50:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.516 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.516 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.516 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:07.516 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:12:07.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.737 ms 00:12:07.516 00:12:07.516 --- 10.0.0.2 ping statistics --- 00:12:07.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.516 rtt min/avg/max/mdev = 0.737/0.737/0.737/0.000 ms 00:12:07.516 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.481 ms 00:12:07.778 00:12:07.778 --- 10.0.0.1 ping statistics --- 00:12:07.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.778 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@10 -- # set +x 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2457753 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2457753 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 2457753 ']' 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:07.778 08:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:07.778 [2024-06-09 08:50:30.170028] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:12:07.778 [2024-06-09 08:50:30.170093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.778 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.778 [2024-06-09 08:50:30.238977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.778 [2024-06-09 08:50:30.310238] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:07.778 [2024-06-09 08:50:30.310274] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.778 [2024-06-09 08:50:30.310281] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.778 [2024-06-09 08:50:30.310288] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.778 [2024-06-09 08:50:30.310294] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.778 [2024-06-09 08:50:30.310452] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.778 [2024-06-09 08:50:30.310661] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.778 [2024-06-09 08:50:30.310662] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.778 [2024-06-09 08:50:30.310507] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.720 08:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:08.720 08:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:12:08.720 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:08.720 08:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:08.720 08:50:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:08.720 08:50:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.720 08:50:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:08.720 08:50:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:08.720 08:50:30 
nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:08.720 08:50:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:08.720 08:50:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:08.720 "nvmf_tgt_1" 00:12:08.720 08:50:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:08.720 "nvmf_tgt_2" 00:12:08.981 08:50:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:08.981 08:50:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:08.981 08:50:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:08.981 08:50:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:08.981 true 00:12:08.981 08:50:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:09.241 true 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- 
# nvmftestfini 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:09.241 rmmod nvme_tcp 00:12:09.241 rmmod nvme_fabrics 00:12:09.241 rmmod nvme_keyring 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2457753 ']' 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2457753 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 2457753 ']' 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 2457753 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:09.241 08:50:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2457753 00:12:09.502 08:50:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:09.502 08:50:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:09.502 08:50:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2457753' 00:12:09.502 killing process 
with pid 2457753 00:12:09.502 08:50:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 2457753 00:12:09.502 08:50:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 2457753 00:12:09.502 08:50:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:09.502 08:50:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:09.502 08:50:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:09.502 08:50:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:09.502 08:50:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:09.502 08:50:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.502 08:50:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.502 08:50:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.047 08:50:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:12.047 00:12:12.047 real 0m11.190s 00:12:12.047 user 0m9.249s 00:12:12.047 sys 0m5.755s 00:12:12.047 08:50:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:12.047 08:50:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:12.047 ************************************ 00:12:12.047 END TEST nvmf_multitarget 00:12:12.047 ************************************ 00:12:12.047 08:50:34 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:12.047 08:50:34 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:12.047 08:50:34 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:12.047 08:50:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:12.047 
************************************ 00:12:12.047 START TEST nvmf_rpc 00:12:12.047 ************************************ 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:12.047 * Looking for test storage... 00:12:12.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.047 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:12.048 08:50:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.640 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.640 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:18.640 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:18.640 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:18.640 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:18.640 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:18.640 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:18.640 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:18.640 08:50:40 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:18.641 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:18.641 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:18.641 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:18.641 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
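The "Found net devices under …" lines above come from globbing sysfs to map each PCI function to its kernel net device. A minimal runnable sketch of that pattern, using a temporary directory in place of `/sys` since the real paths require the e810 NICs present on this host:

```shell
# Stand-in sysfs tree; on the real host the glob runs over /sys/bus/pci/devices.
sysroot=$(mktemp -d)
mkdir -p "$sysroot/0000:4b:00.0/net/cvl_0_0" "$sysroot/0000:4b:00.1/net/cvl_0_1"

found=$(for pci in "$sysroot"/0000:4b:00.*; do
    # Same shape as pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in the log
    for net_dev in "$pci/net/"*; do
        echo "Found net devices under ${pci##*/}: ${net_dev##*/}"
    done
done)
echo "$found"
rm -rf "$sysroot"
```

Each PCI function directory holds a `net/` subdirectory whose entries are the interface names, which is why stripping the path prefix (`${net_dev##*/}`) is all the discovery step needs.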
00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:18.641 08:50:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:18.641 08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:18.641 08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:18.641 08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:18.641 08:50:41 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:18.641 08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:18.641 08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:18.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:18.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:12:18.641 00:12:18.641 --- 10.0.0.2 ping statistics --- 00:12:18.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.641 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:12:18.641 08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:18.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:18.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:12:18.641 00:12:18.641 --- 10.0.0.1 ping statistics --- 00:12:18.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.641 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:12:18.641 08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.641 08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:18.641 08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:18.641 08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.641 08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:18.641 08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:18.641 08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.641 08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:18.641 08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:18.904 08:50:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:18.904 
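The namespace setup just logged (`ip netns add`, moving `cvl_0_0` into the namespace, addressing both ends, the iptables ACCEPT on port 4420, and the two verification pings) needs root and the physical interfaces. A dry-run sketch of the same topology, with interface and namespace names taken from this log, that only prints the command plan:

```shell
# run() echoes each command when DRY_RUN=1 instead of executing it, so the
# topology can be reviewed without root or the physical cvl_* interfaces.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_tcp_ns() {
    run ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    run ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target NIC in
    run ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the default ns
    run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    run ip link set cvl_0_1 up
    run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    run ip netns exec cvl_0_0_ns_spdk ip link set lo up
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
}

DRY_RUN=1
plan=$(setup_tcp_ns)
echo "$plan"
```

The split puts the target (10.0.0.2) behind a namespace so the initiator-side kernel and the SPDK target never share a network stack; the pings in the log confirm both directions before `nvmf_tgt` is launched with `ip netns exec`.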
08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:18.904 08:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:18.904 08:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.904 08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2462245 00:12:18.904 08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2462245 00:12:18.904 08:50:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.904 08:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 2462245 ']' 00:12:18.904 08:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.904 08:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:18.904 08:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.904 08:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:18.904 08:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.904 [2024-06-09 08:50:41.268980] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:12:18.904 [2024-06-09 08:50:41.269034] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.904 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.904 [2024-06-09 08:50:41.337183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.904 [2024-06-09 08:50:41.405623] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.904 [2024-06-09 08:50:41.405662] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.904 [2024-06-09 08:50:41.405669] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.904 [2024-06-09 08:50:41.405675] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.904 [2024-06-09 08:50:41.405681] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:18.904 [2024-06-09 08:50:41.405818] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.904 [2024-06-09 08:50:41.405935] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.904 [2024-06-09 08:50:41.406096] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.904 [2024-06-09 08:50:41.406097] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.478 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:19.478 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:12:19.478 08:50:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:19.478 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:19.478 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:19.740 "tick_rate": 2400000000, 00:12:19.740 "poll_groups": [ 00:12:19.740 { 00:12:19.740 "name": "nvmf_tgt_poll_group_000", 00:12:19.740 "admin_qpairs": 0, 00:12:19.740 "io_qpairs": 0, 00:12:19.740 "current_admin_qpairs": 0, 00:12:19.740 "current_io_qpairs": 0, 00:12:19.740 "pending_bdev_io": 0, 00:12:19.740 "completed_nvme_io": 0, 00:12:19.740 "transports": [] 00:12:19.740 }, 00:12:19.740 { 00:12:19.740 "name": "nvmf_tgt_poll_group_001", 00:12:19.740 "admin_qpairs": 0, 00:12:19.740 "io_qpairs": 0, 00:12:19.740 "current_admin_qpairs": 
0, 00:12:19.740 "current_io_qpairs": 0, 00:12:19.740 "pending_bdev_io": 0, 00:12:19.740 "completed_nvme_io": 0, 00:12:19.740 "transports": [] 00:12:19.740 }, 00:12:19.740 { 00:12:19.740 "name": "nvmf_tgt_poll_group_002", 00:12:19.740 "admin_qpairs": 0, 00:12:19.740 "io_qpairs": 0, 00:12:19.740 "current_admin_qpairs": 0, 00:12:19.740 "current_io_qpairs": 0, 00:12:19.740 "pending_bdev_io": 0, 00:12:19.740 "completed_nvme_io": 0, 00:12:19.740 "transports": [] 00:12:19.740 }, 00:12:19.740 { 00:12:19.740 "name": "nvmf_tgt_poll_group_003", 00:12:19.740 "admin_qpairs": 0, 00:12:19.740 "io_qpairs": 0, 00:12:19.740 "current_admin_qpairs": 0, 00:12:19.740 "current_io_qpairs": 0, 00:12:19.740 "pending_bdev_io": 0, 00:12:19.740 "completed_nvme_io": 0, 00:12:19.740 "transports": [] 00:12:19.740 } 00:12:19.740 ] 00:12:19.740 }' 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.740 [2024-06-09 08:50:42.193307] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:19.740 "tick_rate": 2400000000, 00:12:19.740 "poll_groups": [ 00:12:19.740 { 00:12:19.740 "name": "nvmf_tgt_poll_group_000", 00:12:19.740 "admin_qpairs": 0, 00:12:19.740 "io_qpairs": 0, 00:12:19.740 "current_admin_qpairs": 0, 00:12:19.740 "current_io_qpairs": 0, 00:12:19.740 "pending_bdev_io": 0, 00:12:19.740 "completed_nvme_io": 0, 00:12:19.740 "transports": [ 00:12:19.740 { 00:12:19.740 "trtype": "TCP" 00:12:19.740 } 00:12:19.740 ] 00:12:19.740 }, 00:12:19.740 { 00:12:19.740 "name": "nvmf_tgt_poll_group_001", 00:12:19.740 "admin_qpairs": 0, 00:12:19.740 "io_qpairs": 0, 00:12:19.740 "current_admin_qpairs": 0, 00:12:19.740 "current_io_qpairs": 0, 00:12:19.740 "pending_bdev_io": 0, 00:12:19.740 "completed_nvme_io": 0, 00:12:19.740 "transports": [ 00:12:19.740 { 00:12:19.740 "trtype": "TCP" 00:12:19.740 } 00:12:19.740 ] 00:12:19.740 }, 00:12:19.740 { 00:12:19.740 "name": "nvmf_tgt_poll_group_002", 00:12:19.740 "admin_qpairs": 0, 00:12:19.740 "io_qpairs": 0, 00:12:19.740 "current_admin_qpairs": 0, 00:12:19.740 "current_io_qpairs": 0, 00:12:19.740 "pending_bdev_io": 0, 00:12:19.740 "completed_nvme_io": 0, 00:12:19.740 "transports": [ 00:12:19.740 { 00:12:19.740 "trtype": "TCP" 00:12:19.740 } 00:12:19.740 ] 00:12:19.740 }, 00:12:19.740 { 00:12:19.740 "name": "nvmf_tgt_poll_group_003", 00:12:19.740 "admin_qpairs": 0, 00:12:19.740 "io_qpairs": 0, 00:12:19.740 "current_admin_qpairs": 0, 00:12:19.740 "current_io_qpairs": 0, 00:12:19.740 "pending_bdev_io": 0, 00:12:19.740 "completed_nvme_io": 0, 00:12:19.740 "transports": [ 00:12:19.740 { 00:12:19.740 "trtype": "TCP" 00:12:19.740 } 00:12:19.740 ] 00:12:19.740 } 
00:12:19.740 ] 00:12:19.740 }' 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:19.740 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.741 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.002 Malloc1 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- 
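The `jcount`/`jsum` helpers exercised above reduce the `nvmf_get_stats` JSON by extracting one field per poll group with jq and then counting or summing. A self-contained approximation of `jsum` (grep stands in for jq so the sketch runs without that dependency; the nonzero qpair values are illustrative, not from this run):

```shell
# Stats payload trimmed to the fields jsum touches; values are hypothetical.
stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","admin_qpairs":0,"io_qpairs":2},
  {"name":"nvmf_tgt_poll_group_001","admin_qpairs":1,"io_qpairs":3}]}'

# jsum '.poll_groups[].io_qpairs': pull each per-group value, sum with awk,
# mirroring the jq | awk '{s+=$1}END{print s}' pipeline in the log.
jsum_io=$(printf '%s\n' "$stats" \
    | grep -o '"io_qpairs":[0-9]*' \
    | awk -F: '{s+=$2} END {print s}')
echo "$jsum_io"   # prints 5
```

In the run above both sums are 0, which is what the `(( 0 == 0 ))` checks assert: no qpairs exist before the transport is created and a host connects.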
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.002 [2024-06-09 08:50:42.358559] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:20.002 [2024-06-09 08:50:42.385313] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:20.002 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:20.002 could not add new controller: failed to write to nvme-fabrics device 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:20.002 08:50:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.922 08:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.922 08:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:21.922 08:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.922 08:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:21.922 08:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:23.836 08:50:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc 
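The failed `nvme connect` above is intentional: the host is not yet on the subsystem's allow list, so the test wraps the command in the `NOT` helper from autotest_common.sh, which passes only when the wrapped command fails. A simplified sketch of that inversion idiom (the real helper also special-cases exit statuses above 128 for signals, which is omitted here):

```shell
# Simplified NOT: succeed only if the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    # return 0 (success) exactly when the command returned nonzero
    (( es != 0 ))
}

NOT false && echo "false failed, as required"
```

After `nvmf_subsystem_add_host` registers the host NQN, the same connect is run without `NOT` and is expected to succeed.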
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.836 08:50:46 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.836 [2024-06-09 08:50:46.153695] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:23.836 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:23.836 could not add new controller: failed to write to nvme-fabrics device 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:23.836 08:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.219 08:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:25.219 08:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:25.219 08:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.219 08:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:25.219 08:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:27.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- 
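The `waitforserial` trace above (autotest_common.sh@1197-1207) is a polling loop: after `nvme connect`, it repeatedly greps `lsblk -l -o NAME,SERIAL` for the serial `SPDKISFASTANDAWESOME` until the device appears or roughly 15 tries elapse. The underlying retry idiom, generalized to any predicate command (a sketch; the helper name and interval are illustrative, not from autotest_common.sh):

```shell
# retry <max_tries> <cmd...>: rerun cmd until it succeeds or tries run out.
retry() {
    local max=$1 i=0
    shift
    while (( i++ < max )); do
        "$@" && return 0
        sleep 0.1    # the real helper sleeps 2s between lsblk polls
    done
    return 1
}

retry 5 true && echo "predicate satisfied"
```

`waitforserial_disconnect` is the mirror image, polling until the serial disappears from `lsblk` output after `nvme disconnect`.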
common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.765 [2024-06-09 08:50:49.919084] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.765 08:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.150 08:50:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.150 08:50:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:29.150 08:50:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.150 08:50:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:29.150 08:50:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:31.064 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:31.064 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:31.064 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.064 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:31.064 08:50:53 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.064 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:31.064 08:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.064 08:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.064 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:31.064 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:31.064 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.064 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:31.064 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.064 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:31.064 08:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:31.064 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.064 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.325 [2024-06-09 08:50:53.657645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.325 08:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.727 08:50:55 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.727 08:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:32.727 08:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.727 08:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:32.727 08:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:35.275 08:50:57 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.275 [2024-06-09 08:50:57.437025] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:35.275 08:50:57 
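Each pass of the `for i in $(seq 1 $loops)` loop traced here (target/rpc.sh@81-94) drives the same subsystem lifecycle over JSON-RPC. The dry-run below documents that call sequence only: `rpc_cmd` is stubbed to echo, so nothing talks to a running SPDK target, and the namespace/host steps between add and remove are elided:

```shell
# Echo-only stub standing in for SPDK's rpc.py wrapper.
rpc_cmd() { echo "rpc.py $*"; }

demo_iteration() {
    local nqn=nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host "$nqn"
    # ... host runs nvme connect / waitforserial / nvme disconnect ...
    rpc_cmd nvmf_subsystem_remove_ns "$nqn" 5
    rpc_cmd nvmf_delete_subsystem "$nqn"
}

demo_iteration
```

Namespace 5 is removed explicitly before `nvmf_delete_subsystem` tears down the rest, matching the rpc.sh@93-94 order in the log.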
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.275 08:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.657 08:50:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.657 08:50:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:36.657 08:50:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.657 08:50:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:36.657 08:50:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:38.569 08:51:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:38.569 08:51:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:38.569 08:51:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.569 08:51:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:38.569 08:51:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.569 08:51:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 
0 00:12:38.569 08:51:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.569 08:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.569 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:38.569 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:38.569 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.569 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:38.569 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.569 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:38.569 08:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:38.569 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:38.569 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.829 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:38.829 08:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.829 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:38.829 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.829 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:38.829 08:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:38.829 08:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:38.829 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:12:38.829 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.830 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:38.830 08:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.830 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:38.830 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.830 [2024-06-09 08:51:01.168882] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.830 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:38.830 08:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:38.830 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:38.830 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.830 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:38.830 08:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:38.830 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:38.830 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.830 08:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:38.830 08:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.214 08:51:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.214 08:51:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 
00:12:40.214 08:51:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.214 08:51:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:40.214 08:51:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:42.759 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:42.759 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:42.759 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.759 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:42.759 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.759 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:42.759 08:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.759 08:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.759 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.760 [2024-06-09 08:51:04.893665] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:42.760 08:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.142 08:51:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.142 08:51:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:44.142 08:51:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.142 08:51:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:44.142 08:51:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.053 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.053 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.053 08:51:08 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.314 [2024-06-09 08:51:08.621277] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.314 [2024-06-09 08:51:08.681419] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.314 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.314 [2024-06-09 08:51:08.745586] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.315 08:51:08 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.315 [2024-06-09 08:51:08.805773] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.315 [2024-06-09 08:51:08.865970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.315 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:46.576 "tick_rate": 2400000000, 00:12:46.576 "poll_groups": [ 00:12:46.576 { 00:12:46.576 "name": "nvmf_tgt_poll_group_000", 00:12:46.576 "admin_qpairs": 0, 00:12:46.576 "io_qpairs": 224, 00:12:46.576 "current_admin_qpairs": 0, 00:12:46.576 "current_io_qpairs": 0, 00:12:46.576 "pending_bdev_io": 0, 00:12:46.576 "completed_nvme_io": 230, 00:12:46.576 "transports": [ 00:12:46.576 { 00:12:46.576 "trtype": "TCP" 00:12:46.576 } 00:12:46.576 ] 00:12:46.576 }, 00:12:46.576 { 00:12:46.576 "name": "nvmf_tgt_poll_group_001", 00:12:46.576 "admin_qpairs": 1, 00:12:46.576 "io_qpairs": 223, 
00:12:46.576 "current_admin_qpairs": 0, 00:12:46.576 "current_io_qpairs": 0, 00:12:46.576 "pending_bdev_io": 0, 00:12:46.576 "completed_nvme_io": 434, 00:12:46.576 "transports": [ 00:12:46.576 { 00:12:46.576 "trtype": "TCP" 00:12:46.576 } 00:12:46.576 ] 00:12:46.576 }, 00:12:46.576 { 00:12:46.576 "name": "nvmf_tgt_poll_group_002", 00:12:46.576 "admin_qpairs": 6, 00:12:46.576 "io_qpairs": 218, 00:12:46.576 "current_admin_qpairs": 0, 00:12:46.576 "current_io_qpairs": 0, 00:12:46.576 "pending_bdev_io": 0, 00:12:46.576 "completed_nvme_io": 268, 00:12:46.576 "transports": [ 00:12:46.576 { 00:12:46.576 "trtype": "TCP" 00:12:46.576 } 00:12:46.576 ] 00:12:46.576 }, 00:12:46.576 { 00:12:46.576 "name": "nvmf_tgt_poll_group_003", 00:12:46.576 "admin_qpairs": 0, 00:12:46.576 "io_qpairs": 224, 00:12:46.576 "current_admin_qpairs": 0, 00:12:46.576 "current_io_qpairs": 0, 00:12:46.576 "pending_bdev_io": 0, 00:12:46.576 "completed_nvme_io": 307, 00:12:46.576 "transports": [ 00:12:46.576 { 00:12:46.576 "trtype": "TCP" 00:12:46.576 } 00:12:46.576 ] 00:12:46.576 } 00:12:46.576 ] 00:12:46.576 }' 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:46.576 08:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 889 > 0 )) 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:46.576 rmmod nvme_tcp 00:12:46.576 rmmod nvme_fabrics 00:12:46.576 rmmod nvme_keyring 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2462245 ']' 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2462245 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 2462245 ']' 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 2462245 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:46.576 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2462245 00:12:46.837 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:46.837 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:46.837 
08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2462245' 00:12:46.837 killing process with pid 2462245 00:12:46.837 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 2462245 00:12:46.837 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 2462245 00:12:46.837 08:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:46.837 08:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:46.837 08:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:46.837 08:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:46.837 08:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:46.837 08:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.837 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.837 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.384 08:51:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:49.384 00:12:49.384 real 0m37.285s 00:12:49.384 user 1m53.468s 00:12:49.384 sys 0m7.010s 00:12:49.384 08:51:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:49.384 08:51:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.384 ************************************ 00:12:49.384 END TEST nvmf_rpc 00:12:49.384 ************************************ 00:12:49.384 08:51:11 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:49.384 08:51:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:49.384 08:51:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:49.384 08:51:11 nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:12:49.384 ************************************ 00:12:49.384 START TEST nvmf_invalid 00:12:49.384 ************************************ 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:49.384 * Looking for test storage... 00:12:49.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:49.384 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.385 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:49.385 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:49.385 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:49.385 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.385 08:51:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:49.385 08:51:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.385 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:49.385 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:49.385 08:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:49.385 08:51:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:55.974 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:55.974 08:51:18 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:55.974 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:55.974 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:55.974 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.974 08:51:18 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:55.974 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:56.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:56.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:12:56.235 00:12:56.235 --- 10.0.0.2 ping statistics --- 00:12:56.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.235 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:56.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:56.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.476 ms 00:12:56.235 00:12:56.235 --- 10.0.0.1 ping statistics --- 00:12:56.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.235 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=2472643 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2472643 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 2472643 ']' 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:56.235 08:51:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:56.235 [2024-06-09 08:51:18.685611] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:12:56.235 [2024-06-09 08:51:18.685662] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.235 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.235 [2024-06-09 08:51:18.750774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:56.495 [2024-06-09 08:51:18.816085] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.495 [2024-06-09 08:51:18.816121] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:56.495 [2024-06-09 08:51:18.816129] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.495 [2024-06-09 08:51:18.816136] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.495 [2024-06-09 08:51:18.816142] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.495 [2024-06-09 08:51:18.816274] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.495 [2024-06-09 08:51:18.816392] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.495 [2024-06-09 08:51:18.816535] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.495 [2024-06-09 08:51:18.816535] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.066 08:51:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:57.066 08:51:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:12:57.066 08:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:57.066 08:51:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:57.066 08:51:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:57.066 08:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.066 08:51:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:57.066 08:51:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25566 00:12:57.327 [2024-06-09 08:51:19.634350] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:57.327 08:51:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- 
# out='request: 00:12:57.327 { 00:12:57.327 "nqn": "nqn.2016-06.io.spdk:cnode25566", 00:12:57.327 "tgt_name": "foobar", 00:12:57.327 "method": "nvmf_create_subsystem", 00:12:57.327 "req_id": 1 00:12:57.327 } 00:12:57.327 Got JSON-RPC error response 00:12:57.327 response: 00:12:57.327 { 00:12:57.327 "code": -32603, 00:12:57.327 "message": "Unable to find target foobar" 00:12:57.327 }' 00:12:57.327 08:51:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:57.327 { 00:12:57.327 "nqn": "nqn.2016-06.io.spdk:cnode25566", 00:12:57.327 "tgt_name": "foobar", 00:12:57.327 "method": "nvmf_create_subsystem", 00:12:57.327 "req_id": 1 00:12:57.327 } 00:12:57.327 Got JSON-RPC error response 00:12:57.327 response: 00:12:57.327 { 00:12:57.327 "code": -32603, 00:12:57.328 "message": "Unable to find target foobar" 00:12:57.328 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:57.328 08:51:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:57.328 08:51:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode32650 00:12:57.328 [2024-06-09 08:51:19.810933] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32650: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:57.328 08:51:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:57.328 { 00:12:57.328 "nqn": "nqn.2016-06.io.spdk:cnode32650", 00:12:57.328 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:57.328 "method": "nvmf_create_subsystem", 00:12:57.328 "req_id": 1 00:12:57.328 } 00:12:57.328 Got JSON-RPC error response 00:12:57.328 response: 00:12:57.328 { 00:12:57.328 "code": -32602, 00:12:57.328 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:57.328 }' 00:12:57.328 08:51:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:57.328 { 00:12:57.328 "nqn": 
"nqn.2016-06.io.spdk:cnode32650", 00:12:57.328 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:57.328 "method": "nvmf_create_subsystem", 00:12:57.328 "req_id": 1 00:12:57.328 } 00:12:57.328 Got JSON-RPC error response 00:12:57.328 response: 00:12:57.328 { 00:12:57.328 "code": -32602, 00:12:57.328 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:57.328 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:57.328 08:51:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:57.328 08:51:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2639 00:12:57.589 [2024-06-09 08:51:19.987510] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2639: invalid model number 'SPDK_Controller' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:57.589 { 00:12:57.589 "nqn": "nqn.2016-06.io.spdk:cnode2639", 00:12:57.589 "model_number": "SPDK_Controller\u001f", 00:12:57.589 "method": "nvmf_create_subsystem", 00:12:57.589 "req_id": 1 00:12:57.589 } 00:12:57.589 Got JSON-RPC error response 00:12:57.589 response: 00:12:57.589 { 00:12:57.589 "code": -32602, 00:12:57.589 "message": "Invalid MN SPDK_Controller\u001f" 00:12:57.589 }' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:57.589 { 00:12:57.589 "nqn": "nqn.2016-06.io.spdk:cnode2639", 00:12:57.589 "model_number": "SPDK_Controller\u001f", 00:12:57.589 "method": "nvmf_create_subsystem", 00:12:57.589 "req_id": 1 00:12:57.589 } 00:12:57.589 Got JSON-RPC error response 00:12:57.589 response: 00:12:57.589 { 00:12:57.589 "code": -32602, 00:12:57.589 "message": "Invalid MN SPDK_Controller\u001f" 00:12:57.589 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@19 -- # local length=21 ll 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x3f' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=']' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:57.590 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:57.590 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:57.590 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.590 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.590 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:57.590 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:57.590 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:57.590 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.590 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.590 08:51:20 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 95 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ m == \- ]] 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'mx?AW$huix~]u0g\_/'\''vg' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'mx?AW$huix~]u0g\_/'\''vg' nqn.2016-06.io.spdk:cnode1303 00:12:57.885 [2024-06-09 08:51:20.320561] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1303: invalid serial number 'mx?AW$huix~]u0g\_/'vg' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:57.885 { 00:12:57.885 "nqn": "nqn.2016-06.io.spdk:cnode1303", 00:12:57.885 "serial_number": "mx?AW$huix~]u0g\\_/'\''vg", 00:12:57.885 "method": "nvmf_create_subsystem", 00:12:57.885 "req_id": 1 00:12:57.885 } 00:12:57.885 Got JSON-RPC error response 00:12:57.885 response: 00:12:57.885 { 00:12:57.885 "code": -32602, 00:12:57.885 "message": "Invalid SN mx?AW$huix~]u0g\\_/'\''vg" 00:12:57.885 }' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:57.885 { 00:12:57.885 "nqn": "nqn.2016-06.io.spdk:cnode1303", 00:12:57.885 "serial_number": "mx?AW$huix~]u0g\\_/'vg", 00:12:57.885 "method": "nvmf_create_subsystem", 00:12:57.885 "req_id": 1 00:12:57.885 } 00:12:57.885 Got JSON-RPC error response 00:12:57.885 response: 00:12:57.885 { 00:12:57.885 "code": -32602, 00:12:57.885 "message": "Invalid SN mx?AW$huix~]u0g\\_/'vg" 00:12:57.885 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' 
'36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:57.885 08:51:20 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:57.885 08:51:20 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.885 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:58.147 08:51:20 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:58.147 08:51:20 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:58.147 08:51:20 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:58.147 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.148 08:51:20 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:58.148 08:51:20 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ _ == \- ]] 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '_}A}Lg%G,e);#/"@'\''x&wAXt,#os>-2U`Q:-JE[VW5' 00:12:58.148 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '_}A}Lg%G,e);#/"@'\''x&wAXt,#os>-2U`Q:-JE[VW5' nqn.2016-06.io.spdk:cnode23968 00:12:58.407 [2024-06-09 08:51:20.802148] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23968: invalid model number '_}A}Lg%G,e);#/"@'x&wAXt,#os>-2U`Q:-JE[VW5' 00:12:58.407 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:58.407 { 00:12:58.407 "nqn": "nqn.2016-06.io.spdk:cnode23968", 00:12:58.407 "model_number": "_}A}Lg%G,e);#/\"@'\''x&wAXt,#os>-2U`Q:-JE[VW5", 00:12:58.407 "method": "nvmf_create_subsystem", 00:12:58.407 "req_id": 1 00:12:58.407 } 00:12:58.407 Got JSON-RPC error response 00:12:58.407 response: 00:12:58.407 { 00:12:58.407 "code": -32602, 00:12:58.407 "message": "Invalid MN _}A}Lg%G,e);#/\"@'\''x&wAXt,#os>-2U`Q:-JE[VW5" 00:12:58.407 }' 00:12:58.407 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:58.407 { 00:12:58.407 "nqn": 
"nqn.2016-06.io.spdk:cnode23968", 00:12:58.407 "model_number": "_}A}Lg%G,e);#/\"@'x&wAXt,#os>-2U`Q:-JE[VW5", 00:12:58.407 "method": "nvmf_create_subsystem", 00:12:58.408 "req_id": 1 00:12:58.408 } 00:12:58.408 Got JSON-RPC error response 00:12:58.408 response: 00:12:58.408 { 00:12:58.408 "code": -32602, 00:12:58.408 "message": "Invalid MN _}A}Lg%G,e);#/\"@'x&wAXt,#os>-2U`Q:-JE[VW5" 00:12:58.408 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:58.408 08:51:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:58.667 [2024-06-09 08:51:20.970796] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.667 08:51:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:58.667 08:51:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:58.667 08:51:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:58.667 08:51:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:58.667 08:51:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:58.667 08:51:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:58.927 [2024-06-09 08:51:21.319934] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:58.927 08:51:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:58.927 { 00:12:58.927 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:58.927 "listen_address": { 00:12:58.927 "trtype": "tcp", 00:12:58.927 "traddr": "", 00:12:58.927 "trsvcid": "4421" 00:12:58.927 }, 00:12:58.927 "method": "nvmf_subsystem_remove_listener", 00:12:58.927 "req_id": 1 00:12:58.927 } 00:12:58.927 Got JSON-RPC error 
response 00:12:58.927 response: 00:12:58.927 { 00:12:58.927 "code": -32602, 00:12:58.927 "message": "Invalid parameters" 00:12:58.927 }' 00:12:58.927 08:51:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:58.927 { 00:12:58.927 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:58.927 "listen_address": { 00:12:58.927 "trtype": "tcp", 00:12:58.927 "traddr": "", 00:12:58.927 "trsvcid": "4421" 00:12:58.927 }, 00:12:58.927 "method": "nvmf_subsystem_remove_listener", 00:12:58.927 "req_id": 1 00:12:58.927 } 00:12:58.927 Got JSON-RPC error response 00:12:58.927 response: 00:12:58.927 { 00:12:58.927 "code": -32602, 00:12:58.927 "message": "Invalid parameters" 00:12:58.927 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:58.927 08:51:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30411 -i 0 00:12:59.187 [2024-06-09 08:51:21.492456] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30411: invalid cntlid range [0-65519] 00:12:59.187 08:51:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:59.187 { 00:12:59.187 "nqn": "nqn.2016-06.io.spdk:cnode30411", 00:12:59.187 "min_cntlid": 0, 00:12:59.187 "method": "nvmf_create_subsystem", 00:12:59.187 "req_id": 1 00:12:59.187 } 00:12:59.187 Got JSON-RPC error response 00:12:59.187 response: 00:12:59.187 { 00:12:59.187 "code": -32602, 00:12:59.187 "message": "Invalid cntlid range [0-65519]" 00:12:59.187 }' 00:12:59.187 08:51:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:59.187 { 00:12:59.187 "nqn": "nqn.2016-06.io.spdk:cnode30411", 00:12:59.187 "min_cntlid": 0, 00:12:59.187 "method": "nvmf_create_subsystem", 00:12:59.187 "req_id": 1 00:12:59.187 } 00:12:59.187 Got JSON-RPC error response 00:12:59.187 response: 00:12:59.187 { 00:12:59.187 "code": -32602, 00:12:59.187 "message": "Invalid cntlid range [0-65519]" 
00:12:59.187 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:59.187 08:51:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31332 -i 65520 00:12:59.187 [2024-06-09 08:51:21.669055] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31332: invalid cntlid range [65520-65519] 00:12:59.187 08:51:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:59.187 { 00:12:59.187 "nqn": "nqn.2016-06.io.spdk:cnode31332", 00:12:59.187 "min_cntlid": 65520, 00:12:59.187 "method": "nvmf_create_subsystem", 00:12:59.187 "req_id": 1 00:12:59.187 } 00:12:59.187 Got JSON-RPC error response 00:12:59.187 response: 00:12:59.187 { 00:12:59.187 "code": -32602, 00:12:59.187 "message": "Invalid cntlid range [65520-65519]" 00:12:59.187 }' 00:12:59.187 08:51:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:59.187 { 00:12:59.187 "nqn": "nqn.2016-06.io.spdk:cnode31332", 00:12:59.187 "min_cntlid": 65520, 00:12:59.187 "method": "nvmf_create_subsystem", 00:12:59.187 "req_id": 1 00:12:59.187 } 00:12:59.187 Got JSON-RPC error response 00:12:59.187 response: 00:12:59.187 { 00:12:59.187 "code": -32602, 00:12:59.187 "message": "Invalid cntlid range [65520-65519]" 00:12:59.187 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:59.187 08:51:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25107 -I 0 00:12:59.448 [2024-06-09 08:51:21.841613] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25107: invalid cntlid range [1-0] 00:12:59.448 08:51:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:59.448 { 00:12:59.448 "nqn": "nqn.2016-06.io.spdk:cnode25107", 00:12:59.448 "max_cntlid": 0, 00:12:59.448 "method": "nvmf_create_subsystem", 
00:12:59.448 "req_id": 1 00:12:59.448 } 00:12:59.448 Got JSON-RPC error response 00:12:59.448 response: 00:12:59.448 { 00:12:59.448 "code": -32602, 00:12:59.448 "message": "Invalid cntlid range [1-0]" 00:12:59.448 }' 00:12:59.448 08:51:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:59.448 { 00:12:59.448 "nqn": "nqn.2016-06.io.spdk:cnode25107", 00:12:59.448 "max_cntlid": 0, 00:12:59.448 "method": "nvmf_create_subsystem", 00:12:59.448 "req_id": 1 00:12:59.448 } 00:12:59.448 Got JSON-RPC error response 00:12:59.448 response: 00:12:59.448 { 00:12:59.448 "code": -32602, 00:12:59.448 "message": "Invalid cntlid range [1-0]" 00:12:59.448 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:59.448 08:51:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8540 -I 65520 00:12:59.709 [2024-06-09 08:51:22.014164] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8540: invalid cntlid range [1-65520] 00:12:59.709 08:51:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:59.709 { 00:12:59.709 "nqn": "nqn.2016-06.io.spdk:cnode8540", 00:12:59.709 "max_cntlid": 65520, 00:12:59.709 "method": "nvmf_create_subsystem", 00:12:59.709 "req_id": 1 00:12:59.709 } 00:12:59.709 Got JSON-RPC error response 00:12:59.709 response: 00:12:59.709 { 00:12:59.709 "code": -32602, 00:12:59.709 "message": "Invalid cntlid range [1-65520]" 00:12:59.709 }' 00:12:59.709 08:51:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:59.709 { 00:12:59.709 "nqn": "nqn.2016-06.io.spdk:cnode8540", 00:12:59.709 "max_cntlid": 65520, 00:12:59.709 "method": "nvmf_create_subsystem", 00:12:59.709 "req_id": 1 00:12:59.709 } 00:12:59.709 Got JSON-RPC error response 00:12:59.709 response: 00:12:59.709 { 00:12:59.709 "code": -32602, 00:12:59.709 "message": "Invalid cntlid range [1-65520]" 00:12:59.709 } == 
*\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:59.709 08:51:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13446 -i 6 -I 5 00:12:59.709 [2024-06-09 08:51:22.178690] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13446: invalid cntlid range [6-5] 00:12:59.709 08:51:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:59.709 { 00:12:59.709 "nqn": "nqn.2016-06.io.spdk:cnode13446", 00:12:59.709 "min_cntlid": 6, 00:12:59.709 "max_cntlid": 5, 00:12:59.709 "method": "nvmf_create_subsystem", 00:12:59.709 "req_id": 1 00:12:59.709 } 00:12:59.709 Got JSON-RPC error response 00:12:59.709 response: 00:12:59.709 { 00:12:59.709 "code": -32602, 00:12:59.709 "message": "Invalid cntlid range [6-5]" 00:12:59.709 }' 00:12:59.709 08:51:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:59.709 { 00:12:59.709 "nqn": "nqn.2016-06.io.spdk:cnode13446", 00:12:59.709 "min_cntlid": 6, 00:12:59.709 "max_cntlid": 5, 00:12:59.709 "method": "nvmf_create_subsystem", 00:12:59.709 "req_id": 1 00:12:59.709 } 00:12:59.709 Got JSON-RPC error response 00:12:59.709 response: 00:12:59.709 { 00:12:59.709 "code": -32602, 00:12:59.709 "message": "Invalid cntlid range [6-5]" 00:12:59.709 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:59.709 08:51:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:59.970 { 00:12:59.970 "name": "foobar", 00:12:59.970 "method": "nvmf_delete_target", 00:12:59.970 "req_id": 1 00:12:59.970 } 00:12:59.970 Got JSON-RPC error response 00:12:59.970 response: 00:12:59.970 { 00:12:59.970 "code": -32602, 00:12:59.970 "message": "The specified target doesn'\''t exist, 
cannot delete it." 00:12:59.970 }' 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:59.970 { 00:12:59.970 "name": "foobar", 00:12:59.970 "method": "nvmf_delete_target", 00:12:59.970 "req_id": 1 00:12:59.970 } 00:12:59.970 Got JSON-RPC error response 00:12:59.970 response: 00:12:59.970 { 00:12:59.970 "code": -32602, 00:12:59.970 "message": "The specified target doesn't exist, cannot delete it." 00:12:59.970 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:59.970 rmmod nvme_tcp 00:12:59.970 rmmod nvme_fabrics 00:12:59.970 rmmod nvme_keyring 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2472643 ']' 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2472643 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 2472643 ']' 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # kill -0 2472643 
00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # uname 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2472643 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2472643' 00:12:59.970 killing process with pid 2472643 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 2472643 00:12:59.970 08:51:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 2472643 00:13:00.230 08:51:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:00.231 08:51:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:00.231 08:51:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:00.231 08:51:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:00.231 08:51:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:00.231 08:51:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.231 08:51:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.231 08:51:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.141 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:02.141 00:13:02.141 real 0m13.180s 00:13:02.141 user 0m19.146s 00:13:02.141 sys 0m6.117s 00:13:02.141 08:51:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:02.141 08:51:24 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:02.141 ************************************ 00:13:02.141 END TEST nvmf_invalid 00:13:02.141 ************************************ 00:13:02.141 08:51:24 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:02.141 08:51:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:02.141 08:51:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:02.141 08:51:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:02.401 ************************************ 00:13:02.401 START TEST nvmf_abort 00:13:02.401 ************************************ 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:02.401 * Looking for test storage... 00:13:02.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:02.401 08:51:24 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:08.986 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 
0000:4b:00.1 (0x8086 - 0x159b)' 00:13:08.986 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:08.986 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.986 
08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.986 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:08.987 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 
-- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.987 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:09.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.761 ms 00:13:09.247 00:13:09.247 --- 10.0.0.2 ping statistics --- 00:13:09.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.247 rtt min/avg/max/mdev = 0.761/0.761/0.761/0.000 ms 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:09.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:13:09.247 00:13:09.247 --- 10.0.0.1 ping statistics --- 00:13:09.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.247 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2477503 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2477503 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 2477503 ']' 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:09.247 08:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:09.508 [2024-06-09 08:51:31.831805] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:13:09.508 [2024-06-09 08:51:31.831869] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.508 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.508 [2024-06-09 08:51:31.920511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:09.508 [2024-06-09 08:51:32.011804] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.508 [2024-06-09 08:51:32.011859] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.508 [2024-06-09 08:51:32.011867] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.508 [2024-06-09 08:51:32.011874] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.508 [2024-06-09 08:51:32.011881] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:09.508 [2024-06-09 08:51:32.012017] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.508 [2024-06-09 08:51:32.012184] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.508 [2024-06-09 08:51:32.012185] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.079 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:10.079 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:13:10.080 08:51:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:10.080 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:10.080 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:10.340 [2024-06-09 08:51:32.665662] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:10.340 Malloc0 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:10.340 Delay0 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:10.340 [2024-06-09 08:51:32.747810] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.340 08:51:32 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:10.340 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.340 [2024-06-09 08:51:32.869055] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:12.882 Initializing NVMe Controllers 00:13:12.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:12.882 controller IO queue size 128 less than required 00:13:12.882 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:12.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:12.882 Initialization complete. Launching workers. 
00:13:12.882 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 125, failed: 27829 00:13:12.882 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27892, failed to submit 62 00:13:12.882 success 27833, unsuccess 59, failed 0 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:12.882 rmmod nvme_tcp 00:13:12.882 rmmod nvme_fabrics 00:13:12.882 rmmod nvme_keyring 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2477503 ']' 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2477503 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 2477503 ']' 00:13:12.882 08:51:35 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 2477503 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2477503 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2477503' 00:13:12.882 killing process with pid 2477503 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@968 -- # kill 2477503 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@973 -- # wait 2477503 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.882 08:51:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.425 08:51:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:15.425 00:13:15.425 real 0m12.684s 00:13:15.425 user 0m13.588s 00:13:15.425 sys 0m6.178s 00:13:15.425 08:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 
00:13:15.425 08:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.425 ************************************ 00:13:15.425 END TEST nvmf_abort 00:13:15.425 ************************************ 00:13:15.425 08:51:37 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:15.425 08:51:37 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:15.425 08:51:37 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:15.425 08:51:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:15.425 ************************************ 00:13:15.425 START TEST nvmf_ns_hotplug_stress 00:13:15.425 ************************************ 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:15.425 * Looking for test storage... 
00:13:15.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.425 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.426 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.426 08:51:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.426 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.426 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:15.426 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:15.426 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:15.426 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.426 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:15.426 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:15.426 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:15.426 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.426 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.426 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.426 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:15.426 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:15.426 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:15.426 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:22.013 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.013 
08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:22.013 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:22.013 
Found net devices under 0000:4b:00.0: cvl_0_0 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:22.013 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:22.013 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.274 08:51:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:22.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:22.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:13:22.274 00:13:22.274 --- 10.0.0.2 ping statistics --- 00:13:22.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.274 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:22.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.410 ms 00:13:22.274 00:13:22.274 --- 10.0.0.1 ping statistics --- 00:13:22.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.274 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2482514 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2482514 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 2482514 ']' 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:22.274 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.274 [2024-06-09 08:51:44.809059] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
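The network plumbing that `nvmftestinit` performed just above (nvmf/common.sh@244 through @268) can be read as a short standalone sequence. The sketch below is a dry-run reconstruction, not part of the test run: the interface names `cvl_0_0`/`cvl_0_1`, the `10.0.0.0/24` addresses, and port 4420 are taken from this log, while the `run` echo wrapper and `DRY_RUN` switch are additions so the steps can be previewed without root or real NICs.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-namespace wiring logged by nvmftestinit
# (nvmf/common.sh@244-268). Names and addresses come from this log; the
# `run` wrapper is an addition so the sequence can be inspected without
# root privileges or physical NICs. Set DRY_RUN=0 to actually execute.
set -euo pipefail

TARGET_NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0      # moved into the namespace, carries the target IP
INITIATOR_IF=cvl_0_1   # stays in the root namespace, carries the initiator IP

run() { if [[ "${DRY_RUN:-1}" == 1 ]]; then echo "+ $*"; else "$@"; fi; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$TARGET_NS"
run ip link set "$TARGET_IF" netns "$TARGET_NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
run ip netns exec "$TARGET_NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity is then verified with one ping in each direction:
run ping -c 1 10.0.0.2
run ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1
```

With the namespace in place, `nvmf_tgt` is launched via `ip netns exec cvl_0_0_ns_spdk`, which is why the listener at 10.0.0.2:4420 later in the log is only reachable through `cvl_0_1`.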
00:13:22.274 [2024-06-09 08:51:44.809126] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.535 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.535 [2024-06-09 08:51:44.899834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:22.535 [2024-06-09 08:51:44.992932] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.535 [2024-06-09 08:51:44.992989] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.535 [2024-06-09 08:51:44.992998] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.535 [2024-06-09 08:51:44.993005] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.535 [2024-06-09 08:51:44.993011] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
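Once the target application is up, the rest of the run is driven entirely through `rpc.py`. The `ns_hotplug_stress` flow visible in the remainder of the log (target/ns_hotplug_stress.sh@27 through @50) reduces to a one-time stack setup followed by a detach/reattach/resize loop gated on the perf process staying alive. A condensed sketch, with `$rpc` standing in for the full `.../spdk/scripts/rpc.py` path, the NQN and sizes copied from the log, and a fixed three-iteration loop replacing the real `kill -0 $PERF_PID` gate:

```shell
#!/usr/bin/env bash
# Condensed sketch of target/ns_hotplug_stress.sh as it appears in this log.
# $rpc stands in for /var/jenkins/.../spdk/scripts/rpc.py; the `run` wrapper
# and the fixed 3-iteration loop are additions (the real loop runs for as
# long as spdk_nvme_perf is alive). Set DRY_RUN=0 to actually execute.
set -euo pipefail
rpc=${rpc:-rpc.py}
nqn=nqn.2016-06.io.spdk:cnode1

run() { if [[ "${DRY_RUN:-1}" == 1 ]]; then echo "+ $*"; else "$@"; fi; }

# One-time target setup (sh@27-36)
run "$rpc" nvmf_create_transport -t tcp -o -u 8192
run "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
run "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
run "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
run "$rpc" bdev_malloc_create 32 512 -b Malloc0
run "$rpc" bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000
run "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0
run "$rpc" bdev_null_create NULL1 1000 512
run "$rpc" nvmf_subsystem_add_ns "$nqn" NULL1

# Hotplug loop (sh@44-50), normally gated on `kill -0 $PERF_PID`
null_size=1000
for _ in 1 2 3; do
  run "$rpc" nvmf_subsystem_remove_ns "$nqn" 1       # detach Delay0 (nsid 1)
  run "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0     # reattach it
  null_size=$((null_size + 1))
  run "$rpc" bdev_null_resize NULL1 "$null_size"     # grow NULL1: 1001, 1002, ...
done
```

This is why the log counts `null_size` up from 1001 one step at a time, and why the initiator periodically reports `Read completed with error (sct=0, sc=11)`: the perf workload keeps issuing reads while nsid 1 is momentarily detached.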
00:13:22.535 [2024-06-09 08:51:44.993145] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.535 [2024-06-09 08:51:44.993183] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.535 [2024-06-09 08:51:44.993194] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.106 08:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:23.106 08:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:13:23.106 08:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:23.106 08:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:23.106 08:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.106 08:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.106 08:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:23.106 08:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:23.367 [2024-06-09 08:51:45.777818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.367 08:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:23.626 08:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.626 [2024-06-09 08:51:46.115228] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:13:23.626 08:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:23.886 08:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:24.145 Malloc0 00:13:24.145 08:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:24.145 Delay0 00:13:24.145 08:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.404 08:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:24.694 NULL1 00:13:24.694 08:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:24.694 08:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2482886 00:13:24.694 08:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:24.694 08:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:24.694 08:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.694 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.077 Read completed with error (sct=0, sc=11) 00:13:26.077 08:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.077 08:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:26.077 08:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:26.077 true 00:13:26.077 08:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:26.077 08:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.018 08:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.278 08:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:27.278 08:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:27.278 true 00:13:27.278 08:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:27.278 08:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.539 08:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.798 08:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:27.799 08:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:27.799 true 00:13:27.799 08:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:27.799 08:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.058 08:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.319 08:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:28.319 08:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:28.319 true 00:13:28.319 08:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:28.319 08:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.579 08:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.579 08:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:28.579 08:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:28.840 true 00:13:28.840 08:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:28.840 08:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.101 08:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.101 08:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:29.101 08:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:29.362 true 00:13:29.362 08:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:29.362 08:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.623 08:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.623 08:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:29.623 08:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:29.884 true 00:13:29.884 08:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:29.884 08:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.144 08:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.144 08:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:30.144 08:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:30.405 true 00:13:30.405 08:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:30.405 08:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.405 08:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.676 08:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:30.676 08:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:30.943 true 00:13:30.943 08:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:30.943 08:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.943 08:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.204 08:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:31.204 08:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:31.204 true 00:13:31.465 08:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:31.465 08:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.407 08:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.407 08:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:32.407 08:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:32.407 true 00:13:32.668 08:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:32.668 08:51:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.668 08:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.929 08:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:32.929 08:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:32.929 true 00:13:32.929 08:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:32.929 08:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.189 08:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.478 [2024-06-09 08:51:55.779535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.478 [2024-06-09 08:51:55.779592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.478 [2024-06-09 08:51:55.779615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 
1 00:13:33.478 [2024-06-09 08:51:55.779644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.478 [2024-06-09 08:51:55.779673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.478 [2024-06-09 08:51:55.779701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.478 [2024-06-09 08:51:55.779730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.478 [2024-06-09 08:51:55.779787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.478 [2024-06-09 08:51:55.779815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.478 [2024-06-09 08:51:55.779853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.478 [2024-06-09 08:51:55.779878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.478 [2024-06-09 08:51:55.779908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.478 [2024-06-09 08:51:55.779933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.478 [2024-06-09 08:51:55.779962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.478 [2024-06-09 08:51:55.779991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.478 [2024-06-09 08:51:55.780019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.478 [2024-06-09 08:51:55.780046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.478 [2024-06-09 08:51:55.780073] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.478 [2024-06-09 08:51:55.780103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.478 [2024-06-09 08:51:55.780129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.780158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.780183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.780210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.780238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.780268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.780293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.780321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.780350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.780378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.780412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.780950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.780976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781396] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.781983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 
08:51:55.782247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.782980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 
[2024-06-09 08:51:55.783252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.479 [2024-06-09 08:51:55.783589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.783616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.783646] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.783680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.783708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.783734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.783758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.783787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.783815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.783841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.783868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.783896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.783919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.783946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.783974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784504] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.784715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 
08:51:55.785616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.785975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 
[2024-06-09 08:51:55.786419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.786981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.480 [2024-06-09 08:51:55.787010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787218] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.481 [2024-06-09 08:51:55.787691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.482 Message suppressed 999 times: [2024-06-09 08:51:55.791729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.482 Read completed with error (sct=0, sc=15) 00:13:33.484 [2024-06-09 08:51:55.797181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:13:33.484 [2024-06-09 08:51:55.797206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.797230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.797254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.797276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.797298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.797326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.797352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.797378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.797408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.797437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.797465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.797518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.797545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.797572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.797601] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.797628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.797978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.798009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.798038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.798064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.798090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.798118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.798143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.484 [2024-06-09 08:51:55.798175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 
08:51:55.798747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.798924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 
[2024-06-09 08:51:55.799871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.799987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800257] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.800989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.801027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.801057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.801081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.801104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.801135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.801166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.801203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.801230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.801257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.801285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.801313] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.801340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.801370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.801406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.801435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.801462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.485 [2024-06-09 08:51:55.801487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.801513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.801541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.801566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.801596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.801625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.801652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.801694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.801720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.801749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.801777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.801824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.801852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.801879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.801907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.801938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.801963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.801988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 
08:51:55.802409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.802991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 
[2024-06-09 08:51:55.803242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803627] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.486 [2024-06-09 08:51:55.803999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.486 [2024-06-09 08:51:55.804024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[previous *ERROR* line repeated verbatim, timestamps 08:51:55.804052 through 08:51:55.808114 elided]
00:13:33.488 08:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
[previous *ERROR* line repeated verbatim, timestamps 08:51:55.808138 through 08:51:55.808437 elided]
00:13:33.488 08:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
[previous *ERROR* line repeated verbatim, timestamps 08:51:55.808559 through 08:51:55.814329 elided]
00:13:33.490 [2024-06-09 08:51:55.814360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 
08:51:55.814793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.814977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 
[2024-06-09 08:51:55.815940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.815993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816344] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.490 [2024-06-09 08:51:55.816748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.816770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.816792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.816815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.816837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.816859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.816880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.816902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.816925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.816946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.816969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.816991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817062] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.817990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 
08:51:55.818203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.818988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 
[2024-06-09 08:51:55.819016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819725] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.819990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.820018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.820046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.491 [2024-06-09 08:51:55.820072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820603] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.492 [2024-06-09 08:51:55.820630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.493 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 
08:51:55.831703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.831982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 
[2024-06-09 08:51:55.832520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.832991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.833020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.833048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.833077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.833110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.833141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.495 [2024-06-09 08:51:55.833168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833222] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.833987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834067] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.834976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 
08:51:55.835085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.835984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.836007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 
[2024-06-09 08:51:55.836030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.836056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.836083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.836112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.836140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.836173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.836204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.836233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.836257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.836280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.496 [2024-06-09 08:51:55.836303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.836326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.836348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.836371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.836394] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.836419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.836448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.836478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.836505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.836537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.836566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.836594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.836623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.836645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.836668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.836691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.836714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.837025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.837051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.497 [2024-06-09 08:51:55.837074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.837097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.837124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.837150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.837178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.837207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.837238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.837266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.837294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.837325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.837370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.837397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.837432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.837462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.497 [2024-06-09 08:51:55.837491] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847634] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.847933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 
08:51:55.848755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.848995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.849023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.849049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.849075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.500 [2024-06-09 08:51:55.849104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 
[2024-06-09 08:51:55.849512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.849992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850222] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.850993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851155] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.851993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.852023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.852048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 
08:51:55.852074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.852100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.852128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.852155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.852186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.852220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.852250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.852283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.852315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.501 [2024-06-09 08:51:55.852922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.852956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.852985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 
[2024-06-09 08:51:55.853568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.853986] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.854016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.854043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.854067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.854094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.854124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.854150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.854177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.854210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.854245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.854267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.854299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.854331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.854364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.502 [2024-06-09 08:51:55.854395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.502 [2024-06-09 08:51:55.854427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.504 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:33.505 [identical "Read NLB 1 * block size 512 > SGL length 1" error lines repeated from 08:51:55.854456 through 08:51:55.864121; repeats omitted]
> SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864538] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.864649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.505 [2024-06-09 08:51:55.865717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.865742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.865766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 
08:51:55.865791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.865817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.865844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.865868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.865896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.865922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.865952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.865981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 
[2024-06-09 08:51:55.866637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.866907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867135] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.867985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.868011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.868044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.868074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.868099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.868127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.868156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.868185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.868213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.868237] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.868268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.868297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.868327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.868356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.868387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.868419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.868448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.506 [2024-06-09 08:51:55.868482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.868510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.868536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.868562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.868591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.868618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.868667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.868696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.868728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.868756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.868786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.868817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.868852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.868881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.868910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.868940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.868968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 
08:51:55.869128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.869990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 
[2024-06-09 08:51:55.870306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870702] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.870972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.871002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.871029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.871059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.507 [2024-06-09 08:51:55.871089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.507 [2024-06-09 08:51:55.871121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* message repeated through timestamp 2024-06-09 08:51:55.881662]
> SGL length 1 00:13:33.510 [2024-06-09 08:51:55.881701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.510 [2024-06-09 08:51:55.881730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.510 [2024-06-09 08:51:55.881778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.510 [2024-06-09 08:51:55.881806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.510 [2024-06-09 08:51:55.881835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.510 [2024-06-09 08:51:55.881863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.510 [2024-06-09 08:51:55.881891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.510 [2024-06-09 08:51:55.881918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.510 [2024-06-09 08:51:55.881961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.510 [2024-06-09 08:51:55.881992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.510 [2024-06-09 08:51:55.882023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.882050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.882078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.882103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.882130] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.882158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.882183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.882213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.882244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.882281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.882317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.882348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.882753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.882803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.882834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.882882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.882912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.882946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.882975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 
08:51:55.883424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.883982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 
[2024-06-09 08:51:55.884268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.884987] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.885016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.885044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.885072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.885100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.885127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.885158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.885187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.511 [2024-06-09 08:51:55.885217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885861] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.885980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 
08:51:55.886709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.886846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 
[2024-06-09 08:51:55.887950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.887978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.888004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.888034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.888061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.888090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.888119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.888151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.888180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.888208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.888236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.888265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.888294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.888323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.888351] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.512 [2024-06-09 08:51:55.888381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:13:33.514 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:13:33.516 [2024-06-09 08:51:55.898723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.898749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.516 [2024-06-09 08:51:55.898775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.898805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.898840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.898876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.898908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.898939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.898968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.898995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899191] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.899979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.900004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.900249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.900277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.900302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 
08:51:55.900330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.900359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.900391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.900422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.900451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.900478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.900505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.900537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.900566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.900619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.900646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.900706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.900734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.516 [2024-06-09 08:51:55.900774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.900800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.900830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.900857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.900891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.900920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.900949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.900977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 
[2024-06-09 08:51:55.901284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901666] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.901808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902845] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.902991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 
08:51:55.903726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.903968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.517 [2024-06-09 08:51:55.904005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 
[2024-06-09 08:51:55.904906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.904989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.905016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.905048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.905076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.905104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.905131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.905160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.905188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.905216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.905239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.905267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.905295] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.518 [2024-06-09 08:51:55.905324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error lines repeated verbatim, timestamps only advancing, through 00:13:33.521 / 2024-06-09 08:51:55.915820]
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.915855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.915881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.915916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.915943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.915969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.915995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916734] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.916977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.917004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.917032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.917058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.917084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.917109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.917137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.917163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.917193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.917220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.917249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.917278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.521 [2024-06-09 08:51:55.917309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.917333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.917359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.917385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.917415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.917442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.917470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.917497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 
08:51:55.917891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.917920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.917949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.917972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 
[2024-06-09 08:51:55.918764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.918997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919171] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.919991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920103] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.920980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.921005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.921033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.522 [2024-06-09 08:51:55.921062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 
08:51:55.921146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.921893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.922291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.922320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.922351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 
[2024-06-09 08:51:55.922379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.922410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.922437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.922485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.922515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.922559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.922587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.922616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.922645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.922687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.922715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.922754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.922783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.922818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.922845] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.523 [2024-06-09 08:51:55.922876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.525 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.932615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.932643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.932673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.932702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.932732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.932762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.932787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.932816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.932844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.932871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.932899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.932926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 
[2024-06-09 08:51:55.933358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933808] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.933985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.934017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.934043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.934075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.934102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.934136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.934166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.526 [2024-06-09 08:51:55.934193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934657] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.934995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.935020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.935048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.935076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.935102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.935128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.935613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.935648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.935676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.935711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.935739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.935770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.935799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.935826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.935852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.935885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.935915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 
08:51:55.935944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.935974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 
[2024-06-09 08:51:55.936765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.936994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937185] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.527 [2024-06-09 08:51:55.937976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.938005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.938032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.527 [2024-06-09 08:51:55.938060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938379] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.938975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.939009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.939037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.939066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.939110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.939139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.939186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 08:51:55.939217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 [2024-06-09 
08:51:55.939259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.528 
[previous line repeated several hundred times between 08:51:55.939286 and 08:51:55.950161; duplicates omitted] 
true 00:13:33.530 
[2024-06-09 08:51:55.950189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950577] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.950969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.531 [2024-06-09 08:51:55.951002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.951332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.951367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.951394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.951426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.951453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.531 [2024-06-09 08:51:55.951491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.951520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.951549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.951592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.951619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.951646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.951671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.951698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.951728] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.951760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.951788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.951831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.951857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.951883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.951910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.951941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.951967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.951995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 
08:51:55.952591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.952999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 
[2024-06-09 08:51:55.953750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.953974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954147] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.532 [2024-06-09 08:51:55.954943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.954972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.954998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955025] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.955975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 
08:51:55.956191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 [2024-06-09 08:51:55.956643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.533 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:33.535 [2024-06-09 08:51:55.967066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967556] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.967983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.968014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.968044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.536 [2024-06-09 08:51:55.968072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 
08:51:55.968428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.968973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 
[2024-06-09 08:51:55.969620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.969976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970035] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.970767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.971198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.971228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.971257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.971283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.971311] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.971341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.971370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.971397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.971432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.971463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.971490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.971517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.971545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.971574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.971601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.971632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.971663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.537 [2024-06-09 08:51:55.971691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.538 [2024-06-09 08:51:55.971717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
[... identical nvmf_bdev_ctrlr_read_cmd errors repeated, timestamps 08:51:55.971770 through 08:51:55.972045 ...]
00:13:33.538 08:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886
[... identical nvmf_bdev_ctrlr_read_cmd errors repeated, timestamps 08:51:55.972074 through 08:51:55.972104 ...]
[... identical nvmf_bdev_ctrlr_read_cmd errors repeated, timestamps 08:51:55.972130 through 08:51:55.972390 ...]
00:13:33.538 08:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... identical nvmf_bdev_ctrlr_read_cmd errors repeated, timestamps 08:51:55.972423 through 08:51:55.972514 ...]
[... identical nvmf_bdev_ctrlr_read_cmd errors repeated, timestamps 08:51:55.972543 through 08:51:55.975720 ...]
> SGL length 1 00:13:33.539 [2024-06-09 08:51:55.975750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.975780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.975809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.975842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.975871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.975900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.975927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.975968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.975997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976211] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.976973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.977006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.977042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.977072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.977104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.977130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.977159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.977185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.977213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.977242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.977269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.977298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 
08:51:55.977329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.977359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.977390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.977423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.977476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.977505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.977564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.539 [2024-06-09 08:51:55.977592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.977623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.977651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.977687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.977714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.977758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.977789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.977819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.977848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.977875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.977902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.977929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.977960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.977988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 
[2024-06-09 08:51:55.978236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978879] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.978993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979769] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.979971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.980002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.980029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.980056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.980083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.980111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.980140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.980168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.980195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.980226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.980254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.980286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.980314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.980344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.980373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.980405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.980433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.980459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.540 [2024-06-09 08:51:55.980488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.980615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.980655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.980683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.980720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 
08:51:55.980749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.980782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.980812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.980849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.980878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.980924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.980951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.980985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 
[2024-06-09 08:51:55.981843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.981986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.982013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.982040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.982067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.982094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.982123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.982151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.982182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.982210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.982243] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.541 [2024-06-09 08:51:55.982273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same *ERROR* line repeated through 00:13:33.545 ...]
00:13:33.545 [2024-06-09 08:51:55.992953] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.992991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993823] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.993877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.994012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.994040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.994078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.994108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.994138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.994163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.994192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.994220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.994248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.994275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.994303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.994336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.994371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.994404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.994433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 
08:51:55.995559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.995994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.996022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.996053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.996080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.996107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.996142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.545 [2024-06-09 08:51:55.996169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 
[2024-06-09 08:51:55.996388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996779] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.996983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997801] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.997977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 
08:51:55.998648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.546 [2024-06-09 08:51:55.998958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 [2024-06-09 08:51:55.998982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 [2024-06-09 08:51:55.999009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 [2024-06-09 08:51:55.999039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 [2024-06-09 08:51:55.999369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 [2024-06-09 08:51:55.999400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 [2024-06-09 08:51:55.999432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 [2024-06-09 08:51:55.999460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 [2024-06-09 08:51:55.999494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 [2024-06-09 08:51:55.999524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 [2024-06-09 08:51:55.999550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 [2024-06-09 08:51:55.999580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 [2024-06-09 08:51:55.999607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 [2024-06-09 08:51:55.999634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 [2024-06-09 08:51:55.999663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 [2024-06-09 08:51:55.999691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 [2024-06-09 08:51:55.999716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 [2024-06-09 08:51:55.999745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 
[2024-06-09 08:51:55.999772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 Message suppressed 999 times: [2024-06-09 08:51:55.999896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.547 Read completed with error (sct=0, sc=15) 00:13:33.547
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 
[2024-06-09 08:51:56.010759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.010991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.011021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.011051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.011082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.011109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.838 [2024-06-09 08:51:56.011142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011170] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.011870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012362] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.012971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 
08:51:56.013218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.013982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.014010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.014036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 
[2024-06-09 08:51:56.014063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.014089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.014216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.014241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.014271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.014301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.014335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.839 [2024-06-09 08:51:56.014362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.014390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.014421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.014450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.014477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.014506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.014534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.014785] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.014814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.014842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.014868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.014903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.014929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.014956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.014984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015654] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.015998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.016043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.016071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.016103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.016128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.016155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.016183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.016210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.016240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.016269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.016726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.016764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.016792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.016823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.016852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.016881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 08:51:56.016907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [2024-06-09 
08:51:56.016935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.840 [... identical *ERROR* line repeated many times, 2024-06-09 08:51:56.016964 through 08:51:56.027306 (elapsed 00:13:33.840-00:13:33.844); duplicates elided ...] [2024-06-09
08:51:56.027333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.027976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.028003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.028031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.028058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.028085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.028115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.028147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.028176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 
[2024-06-09 08:51:56.028205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.028232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.028259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.028293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.028321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.028354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.028380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.028413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.028936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.028967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.028996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029109] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.029980] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.030006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.030033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.030059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.030084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.030110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.030140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.030168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.030195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.030221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.844 [2024-06-09 08:51:56.030250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.030280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.030319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.030346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.030392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.030422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.030452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.030478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.030508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.030536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.030568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.030596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.030622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.030648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.030680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.030710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.030740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.030770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.030804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 
08:51:56.031161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.031972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 
[2024-06-09 08:51:56.031999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032405] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.032937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.033340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.033372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.033405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.033438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.033470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.033502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.033532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.033563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.033596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.033624] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.033652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.033680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.033708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.845 [2024-06-09 08:51:56.033740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.846 [2024-06-09 08:51:56.033771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.846 [2024-06-09 08:51:56.033798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.846 [2024-06-09 08:51:56.033848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.846 [2024-06-09 08:51:56.033877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.846 [2024-06-09 08:51:56.033911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.846 [2024-06-09 08:51:56.033940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.846 [2024-06-09 08:51:56.033967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.846 [2024-06-09 08:51:56.033995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.846 [2024-06-09 08:51:56.034027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.846 [2024-06-09 08:51:56.034054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.846 [2024-06-09 08:51:56.034089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.846 Message suppressed 999 times: [2024-06-09 08:51:56.034540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.846 Read completed with error (sct=0, sc=15) 00:13:33.849 [2024-06-09 
08:51:56.044567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.044593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.044625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.044653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.044688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.044718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.044751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.044778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.044808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.044836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.044868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.044902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.044938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.044972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 
[2024-06-09 08:51:56.045800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.045991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.046022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.046051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.046080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.046107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.046136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.046166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.046197] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.046224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.046254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.046280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.046309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.046339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.046366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.849 [2024-06-09 08:51:56.046394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.046988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047017] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.047974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 
08:51:56.048296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.048988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.049016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.049044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.049072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 
[2024-06-09 08:51:56.049099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.049127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.049155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.049182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.049210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.049243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.049271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.049302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.049330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.049359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.049386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.049416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.850 [2024-06-09 08:51:56.049443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.049486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.049512] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.049543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.049572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.049597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.049629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.049663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.049692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.049721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.049747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050725] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.050990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.051016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.051042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.051069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.051093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.051121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.851 [2024-06-09 08:51:56.051150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.851 [... identical "nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" messages repeated several hundred times, 08:51:56.051177 through 08:51:56.061731; repeats elided ...] 00:13:33.854 [2024-06-09 08:51:56.061759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.854 [2024-06-09 08:51:56.061789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.854 [2024-06-09 08:51:56.061819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.854 [2024-06-09 08:51:56.061848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.854 [2024-06-09 08:51:56.061877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.854 [2024-06-09 08:51:56.061904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.854 [2024-06-09 08:51:56.061930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.854 [2024-06-09 08:51:56.061958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.854 [2024-06-09 08:51:56.061989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.854 [2024-06-09 08:51:56.062017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.854 [2024-06-09 08:51:56.062046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.854 [2024-06-09 08:51:56.062072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.854 [2024-06-09 08:51:56.062099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.854 [2024-06-09 08:51:56.062127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 
08:51:56.062183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.062981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 
[2024-06-09 08:51:56.063345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063768] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.063990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064585] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.064964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.065013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.065043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.065073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.065101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.065129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.065158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.065186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.065214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.065238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.065587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.065621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.065653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.855 [2024-06-09 08:51:56.065679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.065706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.065736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.065785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.065815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.065842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 
08:51:56.065869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.065898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.065927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.065955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.065985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 
[2024-06-09 08:51:56.066776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.066979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067172] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.067990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.068018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.068048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.068075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.068103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.068133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.068162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.068191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.068219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.068249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.068278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.068314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.856 [2024-06-09 08:51:56.068343] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.857 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:33.860 [2024-06-09 08:51:56.078930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.078956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.078988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 
08:51:56.079350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.079984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 
[2024-06-09 08:51:56.080194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080601] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.860 [2024-06-09 08:51:56.080666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.080713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081798] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.081972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 
08:51:56.082743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.082991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.083455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.083487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.083517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.083547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.083574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.083606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.083632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.083681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.083709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.083740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.083770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.083800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.083830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.861 [2024-06-09 08:51:56.083863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.083891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.083924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.083951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.083984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 
[2024-06-09 08:51:56.084068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084476] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.084991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.085020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.085053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.085082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.085109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.085138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.085165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.085197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.085223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.085255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.085284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.085310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.085338] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.862 [2024-06-09 08:51:56.085766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [last message repeated with timestamps through 2024-06-09 08:51:56.095935; identical lines omitted]
> SGL length 1 00:13:33.865 [2024-06-09 08:51:56.095967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.095999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096404] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.865 [2024-06-09 08:51:56.096814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.096841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 
08:51:56.097586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.097997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 
[2024-06-09 08:51:56.098466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098852] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.098999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.099029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.099383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.099417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.099443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.099477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.099504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.099533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.099560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.099593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.866 [2024-06-09 08:51:56.099631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.099662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.866 [2024-06-09 08:51:56.099688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.099720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.099749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.099774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.099801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.099828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.099853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.099877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.099906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.099935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.099964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.099996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100028] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 
08:51:56.100905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.100995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.101990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 
[2024-06-09 08:51:56.102162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102573] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.867 [2024-06-09 08:51:56.102942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.868 [2024-06-09 08:51:56.102971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.868 [2024-06-09 08:51:56.102999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.868 [2024-06-09 08:51:56.103026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.868 [identical error repeated with timestamps 08:51:56.103057 through 08:51:56.105738] 00:13:33.868 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:33.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.868 08:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.868 Message suppressed 999
times: Read completed with error (sct=0, sc=11) 00:13:33.868 [2024-06-09 08:51:56.283913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.868 [identical error repeated with timestamps 08:51:56.283957 through 08:51:56.291138] 00:13:33.871
[2024-06-09 08:51:56.291188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291593] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.291976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292761] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.871 [2024-06-09 08:51:56.292987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 
08:51:56.293598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.293998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 
[2024-06-09 08:51:56.294433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.294979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295249] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.295974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.296002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.296024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.296055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.872 [2024-06-09 08:51:56.296084] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.296658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 
08:51:56.297276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 [2024-06-09 08:51:56.297720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.873 
[identical ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd error repeated continuously between 08:51:56.297744 and 08:51:56.307950; duplicate log lines omitted] 00:13:33.874 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:33.874 
[2024-06-09 08:51:56.307977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308387] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.308986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309220] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.309995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 
08:51:56.310408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.877 [2024-06-09 08:51:56.310733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.310760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.310785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.310812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.310843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.310871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.310900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.310928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.310954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.310981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 
[2024-06-09 08:51:56.311224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311622] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.311729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 08:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:33.878 [2024-06-09 08:51:56.312157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 08:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:33.878 [2024-06-09 08:51:56.312338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 
08:51:56.312797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.312974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 
[2024-06-09 08:51:56.313615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.878 [2024-06-09 08:51:56.313665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.879 [2024-06-09 08:51:56.313694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.879 [2024-06-09 08:51:56.313722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.879 [2024-06-09 08:51:56.313750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.879 [2024-06-09 08:51:56.313777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.879 [2024-06-09 08:51:56.313804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.879 [2024-06-09 08:51:56.313833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.879 [2024-06-09 08:51:56.313861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.879 [2024-06-09 08:51:56.313887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.879 [2024-06-09 08:51:56.313918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.879 [2024-06-09 08:51:56.314267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.879 [2024-06-09 08:51:56.314298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.879 [2024-06-09 08:51:56.314332] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.879 [2024-06-09 08:51:56.314360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[... identical "Read NLB 1 * block size 512 > SGL length 1" error repeated continuously from 08:51:56.314360 through 08:51:56.324548; duplicate entries omitted ...] 00:13:33.882 [2024-06-09 08:51:56.324548] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.324576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.324604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.324653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.324681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.324715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.324744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.324775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.324803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.324833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.324865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.324891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.324916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325521] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.325996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 
08:51:56.326532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.882 [2024-06-09 08:51:56.326687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.326717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.326744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.326775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.326805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.326858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.326885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.326912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.326939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.326964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.326993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 
[2024-06-09 08:51:56.327749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.327977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328145] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.328990] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.329998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.330023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.330049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.330078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.330109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 
08:51:56.330144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.330173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.330201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.330231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.330261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.883 [2024-06-09 08:51:56.330291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.330999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 
[2024-06-09 08:51:56.331027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.331058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.331086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.331112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.331142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.331177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.331204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.331229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.331253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.331282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.331315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.331346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.331376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.331408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.331435] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.884 [2024-06-09 08:51:56.331882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.885 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:33.887 [2024-06-09 08:51:56.341580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:13:33.887 [2024-06-09 08:51:56.341608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.341647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.341675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.341704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.341732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.341762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.341786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.341815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.341843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.341872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.341899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.341926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.341952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.341986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.342019] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.342046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.342074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.342100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.342127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.342155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.342180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.887 [2024-06-09 08:51:56.342210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 
08:51:56.342883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.342914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.343970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 
[2024-06-09 08:51:56.344140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344527] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.344992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.345022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.345056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.345094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.345123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.345150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.345578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.345607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.345637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.345663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.345689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.345717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.345744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.345770] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.345799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.345823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.345852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.888 [2024-06-09 08:51:56.345881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.345913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.345940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.345968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.345995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 
08:51:56.346601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.346971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 
[2024-06-09 08:51:56.347777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.347979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348171] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.889 [2024-06-09 08:51:56.348615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" messages from 08:51:56.348644 through 08:51:56.359319 omitted as duplicates)
> SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359752] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.359985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 
08:51:56.360590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.360705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 
[2024-06-09 08:51:56.361737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.361980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.362008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.362041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.362066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.362096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.362124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.362154] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.362183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.893 [2024-06-09 08:51:56.362218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.362860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363322] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.363995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 
08:51:56.364161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.364988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.365016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.365046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 
[2024-06-09 08:51:56.365072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.365100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.365453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.365479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.365507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.365539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.365567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.365594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.365623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.365651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.365680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.365715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.365745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.365782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:33.894 [2024-06-09 08:51:56.365811] ctrlr_bdev.c: 
00:13:33.895 [2024-06-09 08:51:56.365858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:33.895 (the line above repeated continuously from 08:51:56.365858 through 08:51:56.376193; identical output between 00:13:33.895 and 00:13:34.189 omitted)
00:13:33.896 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:13:34.189 [2024-06-09 08:51:56.376222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.376277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.376672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.376702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.376732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.376761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.376789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.376815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.376841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.376867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.376895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.376921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.376950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.376976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377005] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 
08:51:56.377902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.377994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.378021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.378043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.378071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.378097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.378128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.378158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.378187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.189 [2024-06-09 08:51:56.378215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 
[2024-06-09 08:51:56.378846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.378979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379478] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.379995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380366] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.380751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 
08:51:56.381721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.190 [2024-06-09 08:51:56.381952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.381988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 
[2024-06-09 08:51:56.382559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.191 [2024-06-09 08:51:56.382970] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.393942] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.393968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.393993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.394999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.395030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.395063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.395092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.395127] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.395161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.395191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.395217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.395248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.395277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.194 [2024-06-09 08:51:56.395305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 
08:51:56.395951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.395979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.396667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 
[2024-06-09 08:51:56.397101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397498] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.397956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.398006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.398036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.398065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.398095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.398123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.398151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.398179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.398208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.398238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.398281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.398310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.398341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.398368] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.398404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.398433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.398467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.195 [2024-06-09 08:51:56.398496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.398533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.398565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.398596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.398634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.398665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.398689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.398718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.398746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.398772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.398799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.398826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.398854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 
08:51:56.399608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.399978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 
[2024-06-09 08:51:56.400436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.196 [2024-06-09 08:51:56.400861] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.198 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:34.199 [2024-06-09 08:51:56.411102] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.199 [2024-06-09 08:51:56.411676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.411706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.411731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.411759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.411788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.411817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.411843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.411869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.411896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.411927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.411955] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.411986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.412987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 
08:51:56.413190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.413978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 
[2024-06-09 08:51:56.414032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414447] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.414989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.415017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.415044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.415071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.415104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.415138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.200 [2024-06-09 08:51:56.415175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.200 [2024-06-09 08:51:56.415210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415596] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.415998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 
08:51:56.416483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.416986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 
[2024-06-09 08:51:56.417677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.417987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.418013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.201 [2024-06-09 08:51:56.418038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.202 [2024-06-09 08:51:56.418065] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-06-09 08:51:56.428613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.204 [2024-06-09 08:51:56.428654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.204 [2024-06-09 08:51:56.428683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.204 [2024-06-09 08:51:56.428710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.204 [2024-06-09 08:51:56.428739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.204 [2024-06-09 08:51:56.428764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.428791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.428818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.428846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.428874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.428901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.428932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.428963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.428993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429027] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429864] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.429992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.430983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.431012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.431041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 
08:51:56.431069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.431118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.431147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.431182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.431210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.431239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.431268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.431300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.431327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.431357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.431384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.431423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.431456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.431495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.431532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.431559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.205 [2024-06-09 08:51:56.431589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.431617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.431648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.431677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.431704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.431728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.431760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.431789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.431818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.431851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.431879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.431908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.431936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 
[2024-06-09 08:51:56.431964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.431987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432392] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.432973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433586] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.433999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 
08:51:56.434389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.434648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.435002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.435031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.435059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.435092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.206 [2024-06-09 08:51:56.435120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.207 [2024-06-09 08:51:56.435147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.207 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:34.208 
[2024-06-09 08:51:56.445705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.445732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.445757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.445787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.445817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.445843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.445869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.445900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.445933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.445967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.446476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.446509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.446535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.446564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.446590] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.446618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.446643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.446672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.446698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.446726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.446754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.446801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.446829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.446879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.446908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.446949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.446976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447454] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.447972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.448000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.448031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.448057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.448089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.448115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.448143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.448173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.448203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.448228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.448258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.448289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 
08:51:56.448320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.448748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.210 [2024-06-09 08:51:56.448781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.448815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.448843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.448872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.448901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.448931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.448957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.448984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 
[2024-06-09 08:51:56.449568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449957] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.449986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 true 00:13:34.211 [2024-06-09 08:51:56.450095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.450985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 
08:51:56.451205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.451998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.211 [2024-06-09 08:51:56.452025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.212 
[2024-06-09 08:51:56.452065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.212 [2024-06-09 08:51:56.452092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.212 [2024-06-09 08:51:56.452123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.212 [2024-06-09 08:51:56.452150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.212 [2024-06-09 08:51:56.452193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.212 [2024-06-09 08:51:56.452222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.212 [2024-06-09 08:51:56.452264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.212 [2024-06-09 08:51:56.452292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.212 [2024-06-09 08:51:56.452343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.212 [2024-06-09 08:51:56.452373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.212 [2024-06-09 08:51:56.452412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.212 [2024-06-09 08:51:56.452442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.212 [2024-06-09 08:51:56.452471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.212 [2024-06-09 08:51:56.452498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.212 [2024-06-09 08:51:56.452527] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.212 [2024-06-09 08:51:56.452555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463104] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.463848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464226] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.464976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.465001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.465031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 
08:51:56.465060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.465086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.215 [2024-06-09 08:51:56.465113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.465993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 
[2024-06-09 08:51:56.466207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466615] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.466991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467465] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.467985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.468023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.468051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.468083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.468109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.468138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.468164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.468191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.468217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.468248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.468513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.468547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.216 [2024-06-09 08:51:56.468576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.468607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 
08:51:56.468634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.468662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.468692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.468720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.468749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.468778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.468806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.468835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.468861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.468891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.468918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.468946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.468973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 
[2024-06-09 08:51:56.469461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469884] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.217 [2024-06-09 08:51:56.469913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.218 08:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:34.219 08:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.219 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:34.220 [2024-06-09 
08:51:56.480268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.480951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 
[2024-06-09 08:51:56.481188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481608] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.220 [2024-06-09 08:51:56.481659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.481689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.481722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482782] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.482977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 
08:51:56.483625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.483910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 
[2024-06-09 08:51:56.484872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.484973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.485002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.485035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.485064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.485091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.485117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.485144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.485170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.485196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.485223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.221 [2024-06-09 08:51:56.485253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485280] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.485991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486122] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.222 [2024-06-09 08:51:56.486880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 
08:51:56.497589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.497982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.498033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.498064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.498097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.498130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.498168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.498200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.498238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.498269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.498296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.498327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.225 [2024-06-09 08:51:56.498357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 
[2024-06-09 08:51:56.498680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.498988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.499016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.499045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.499074] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.499101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.499133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.499160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.499189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.499219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.499246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.499278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.499768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.499798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.499827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.499856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.499884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.499934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.499963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.226 [2024-06-09 08:51:56.499993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500399] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.500972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 
08:51:56.501342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.226 [2024-06-09 08:51:56.501783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.501810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.501838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.501863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.501895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.501921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.501953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.501980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 
[2024-06-09 08:51:56.502154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502911] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.502998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503732] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.227 [2024-06-09 08:51:56.503760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated for timestamps 08:51:56.503789 through 08:51:56.514244 ...]
00:13:34.229 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
block size 512 > SGL length 1 00:13:34.230 [2024-06-09 08:51:56.514273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.230 [2024-06-09 08:51:56.514307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.230 [2024-06-09 08:51:56.514337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.230 [2024-06-09 08:51:56.514367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.230 [2024-06-09 08:51:56.514395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.230 [2024-06-09 08:51:56.514428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.230 [2024-06-09 08:51:56.514457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.230 [2024-06-09 08:51:56.514485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.230 [2024-06-09 08:51:56.514514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.230 [2024-06-09 08:51:56.514543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.230 [2024-06-09 08:51:56.514575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.230 [2024-06-09 08:51:56.514602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.230 [2024-06-09 08:51:56.514630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.230 [2024-06-09 08:51:56.514659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.230 [2024-06-09 
08:51:56.514683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.230 [2024-06-09 08:51:56.514711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.230 [2024-06-09 08:51:56.514736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.230 [2024-06-09 08:51:56.514762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.514789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.514819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.514847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.514875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.514905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.514930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.514960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.514987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 
[2024-06-09 08:51:56.515824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.515995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516253] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.516967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.517000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.517030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.517058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.517086] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.517117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.517147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.517176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.517209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.517242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.517274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.231 [2024-06-09 08:51:56.517303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.232 [2024-06-09 08:51:56.517331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.232 [2024-06-09 08:51:56.517357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.232 [2024-06-09 08:51:56.517387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.232 [2024-06-09 08:51:56.517419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.232 [2024-06-09 08:51:56.517448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.232 [2024-06-09 08:51:56.517479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.232 [2024-06-09 08:51:56.517860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.232 [2024-06-09 08:51:56.517890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.232 [2024-06-09 08:51:56.517920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.232 [2024-06-09 08:51:56.517948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.232 [2024-06-09 08:51:56.517975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.232 [2024-06-09 08:51:56.518003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.232 [2024-06-09 08:51:56.518034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 
08:51:56.518293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.518994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 
[2024-06-09 08:51:56.519158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519577] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.519721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520740] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.233 [2024-06-09 08:51:56.520768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:13:34.236 [2024-06-09 08:51:56.530588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.236 [2024-06-09 08:51:56.530617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.236 [2024-06-09 08:51:56.530643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.236 [2024-06-09 08:51:56.530671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.236 [2024-06-09 08:51:56.530699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.530988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531242] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.531934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 
08:51:56.532289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.532988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 
[2024-06-09 08:51:56.533359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533772] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.533993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.534019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.534047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.534070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.534102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.534133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.534162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.237 [2024-06-09 08:51:56.534192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.237 [2024-06-09 08:51:56.534220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534641] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.534850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 
08:51:56.535811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.535983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 
[2024-06-09 08:51:56.536623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.536975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.238 [2024-06-09 08:51:56.537000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.239 [2024-06-09 08:51:56.537130] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.239 [2024-06-09 08:51:56.537164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.239 [2024-06-09 08:51:56.537193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.239 [2024-06-09 08:51:56.537222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.239 [2024-06-09 08:51:56.537248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.239 [2024-06-09 08:51:56.537277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.239 [2024-06-09 08:51:56.537306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.239 [2024-06-09 08:51:56.537331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.239 [2024-06-09 08:51:56.537360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.239 [2024-06-09 08:51:56.537584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.239 [2024-06-09 08:51:56.537612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.239 [2024-06-09 08:51:56.537642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.239 [2024-06-09 08:51:56.537671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.239 [2024-06-09 08:51:56.537699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.239 [2024-06-09 08:51:56.537728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.239 [2024-06-09 08:51:56.537756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:34.239 [... previous *ERROR* line repeated verbatim, timestamps 08:51:56.537784 through 08:51:56.546369 ...]
00:13:34.241 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:13:34.241 [2024-06-09 08:51:56.546597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:34.241 [... same *ERROR* line repeated verbatim, timestamps 08:51:56.546636 through 08:51:56.547725 ...]
> SGL length 1 00:13:34.242 [2024-06-09 08:51:56.547753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.547778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.547806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.547831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.547858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.547886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.547915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.547938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.547966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.547993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548513] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.548987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 
08:51:56.549378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.549995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 
[2024-06-09 08:51:56.550199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550712] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.242 [2024-06-09 08:51:56.550947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.550978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551775] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.551981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.552012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.552038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.552069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.552095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.552121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.552150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.552177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.552203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.552230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.552259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.552286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.552313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.552342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.243 [2024-06-09 08:51:56.552370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.552399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.552431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.552457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.552485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.552919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.552950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.552979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 
08:51:56.553004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 
[2024-06-09 08:51:56.553832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.553999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554273] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.244 [2024-06-09 08:51:56.554725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [identical *ERROR* lines from 08:51:56.554753 through 08:51:56.565477 omitted] 00:13:34.247 [2024-06-09 08:51:56.565511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:13:34.247 [2024-06-09 08:51:56.565540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.565574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.565597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.565626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.565654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.565680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.565707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.565733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.565761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.565789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.565819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.565846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.565878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.565905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.565931] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.565962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.565990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.566016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.566045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.566074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.566100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.566131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.566160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.247 [2024-06-09 08:51:56.566191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.566216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.566246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.566274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.566302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.566328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.566355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.566383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.566415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.566442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.566469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.566499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.566997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 
08:51:56.567232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.567981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 
[2024-06-09 08:51:56.568131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568552] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.568912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.569045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.569073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.248 [2024-06-09 08:51:56.569098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.569136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.569174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.569206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.569230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.569262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.569291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.248 [2024-06-09 08:51:56.569318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.569343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.569371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.569398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.569430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.569464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.569497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.569525] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.569557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.569587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.569619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.569646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.569674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.569709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.569738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.569765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.569980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 
08:51:56.570571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.570971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 
[2024-06-09 08:51:56.571835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.571977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.572008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.572038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.572066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.572097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.572124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.572153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.572182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.572210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.572239] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.249 [2024-06-09 08:51:56.572268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.252 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:34.253 [2024-06-09 08:51:56.582811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:13:34.253 [2024-06-09 08:51:56.582838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.582866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.582895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.582926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.582957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.582982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583329] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.583988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 
08:51:56.584425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.584987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 
[2024-06-09 08:51:56.585588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.585990] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.586019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.586052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.586081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.586108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.586137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.586172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.586201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.586228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.586256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.586285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.586314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.586340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.253 [2024-06-09 08:51:56.586366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586844] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.586982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.587957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 
08:51:56.587986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 
[2024-06-09 08:51:56.588857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.588994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.589020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.589047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.589074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.589101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.589128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.589154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.589182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.589328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.589358] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.254 [2024-06-09 08:51:56.589387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.599884] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.599918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.599947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.599977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.600986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.601018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.601046] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.601074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.601104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.601135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.601171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.601209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.601245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.257 [2024-06-09 08:51:56.601280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 
08:51:56.601931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.601987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.602993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 
[2024-06-09 08:51:56.603193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603592] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.603975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604439] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.604995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.605025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.605054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.605079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.605108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.605137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.605166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.605199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.605223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.605265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.605296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.605326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.605354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.258 [2024-06-09 08:51:56.605383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.605413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.605442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.605471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.605498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.605530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.605558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.605615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 
08:51:56.605645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.605673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.605703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.605736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.605767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.605793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.605826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.605857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.605886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 
[2024-06-09 08:51:56.606746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.606977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.607005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.607033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.607066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.607095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.607122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.607151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.607184] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.259 [2024-06-09 08:51:56.607213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... identical ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd read errors repeated, timestamps 2024-06-09 08:51:56.607244 through 08:51:56.614622, omitted ...] 00:13:34.261 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:34.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.261 08:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.522 08:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:34.522 08:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 [2024-06-09 08:51:56.845526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:34.522 true 00:13:34.522 08:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:34.522 08:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.783 08:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.783 08:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:34.783 08:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:35.043 true 00:13:35.043 08:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:35.043 08:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.304 08:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.304 08:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:35.304 08:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:35.565 true 00:13:35.565 08:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:35.565 08:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.507 08:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.507 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:13:36.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.767 08:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:36.767 08:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:36.767 true 00:13:36.767 08:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:36.767 08:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.028 08:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.028 08:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:37.028 08:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:37.289 true 00:13:37.289 08:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:37.289 08:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.550 08:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.550 08:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:37.550 08:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:37.811 true 00:13:37.811 08:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:37.811 08:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.753 08:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.753 08:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:38.753 08:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:39.013 true 00:13:39.014 08:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:39.014 08:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.274 08:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.275 08:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:39.275 08:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:39.534 true 00:13:39.534 08:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:39.534 08:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.534 08:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.794 08:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:39.794 08:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:40.054 true 00:13:40.054 08:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:40.054 08:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.054 08:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.315 08:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:40.315 08:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:40.575 true 00:13:40.575 08:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:40.575 08:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.575 08:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.834 08:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:40.834 08:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:40.834 true 00:13:40.834 08:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:40.834 08:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.093 08:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.353 08:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:41.353 08:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:41.353 true 00:13:41.353 08:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:41.353 08:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.614 08:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.875 08:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:41.875 08:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:41.875 true 00:13:41.875 08:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:41.875 08:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.817 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.817 08:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.817 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.817 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:43.078 08:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:43.078 08:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:43.078 true 00:13:43.339 08:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:43.339 08:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.339 08:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.600 08:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:43.600 08:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:43.600 true 00:13:43.600 08:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:43.600 08:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.862 08:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.122 08:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:44.122 08:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:44.122 true 00:13:44.122 08:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:44.122 08:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.382 08:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.642 08:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:13:44.642 08:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:13:44.642 true 00:13:44.642 08:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:44.642 08:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.902 08:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.163 08:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:13:45.163 08:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:13:45.163 true 00:13:45.163 08:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:45.163 08:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.423 08:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.423 08:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:13:45.423 08:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:13:45.683 true 00:13:45.683 08:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:45.683 08:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.943 08:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.943 08:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:13:45.943 08:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:13:46.203 true 00:13:46.203 08:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:46.204 08:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.144 08:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.404 08:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:13:47.404 08:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:13:47.404 true 00:13:47.404 08:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:47.404 08:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.665 08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.925 
08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:13:47.925 08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:13:47.925 true 00:13:47.925 08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:47.925 08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.186 08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.186 08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:13:48.186 08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:13:48.446 true 00:13:48.446 08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:48.446 08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.707 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.707 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:13:48.707 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:13:48.968 true 
00:13:48.968 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:48.968 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.968 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.229 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:13:49.229 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:13:49.229 true 00:13:49.490 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:49.490 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.490 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.751 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:13:49.751 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:13:49.751 true 00:13:49.751 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:49.751 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:50.012 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.274 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:13:50.274 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:13:50.274 true 00:13:50.274 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:50.274 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.265 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.526 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:13:51.526 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:13:51.787 true 00:13:51.787 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:51.787 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.787 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.048 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:13:52.048 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:13:52.048 true 00:13:52.309 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:52.309 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.309 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.571 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:13:52.571 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:13:52.571 true 00:13:52.571 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:52.571 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.831 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.091 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:13:53.091 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:13:53.091 true 00:13:53.091 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:53.091 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.350 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.610 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:13:53.610 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:13:53.610 true 00:13:53.610 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:53.610 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:54.553 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.814 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:13:54.814 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:13:54.814 true 00:13:54.814 Initializing NVMe Controllers 00:13:54.814 Attached to NVMe 
over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:54.814 Controller IO queue size 128, less than required. 00:13:54.814 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:54.814 Controller IO queue size 128, less than required. 00:13:54.814 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:54.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:54.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:54.815 Initialization complete. Launching workers. 00:13:54.815 ======================================================== 00:13:54.815 Latency(us) 00:13:54.815 Device Information : IOPS MiB/s Average min max 00:13:54.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1786.38 0.87 26773.13 1593.05 1137310.28 00:13:54.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11133.47 5.44 11497.79 2389.70 401454.32 00:13:54.815 ======================================================== 00:13:54.815 Total : 12919.85 6.31 13609.86 1593.05 1137310.28 00:13:54.815 00:13:54.815 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2482886 00:13:54.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2482886) - No such process 00:13:54.815 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2482886 00:13:54.815 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.075 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:55.336 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:55.336 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:55.336 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:55.336 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:55.336 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:55.336 null0 00:13:55.336 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:55.336 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:55.336 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:55.596 null1 00:13:55.596 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:55.596 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:55.596 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:55.596 null2 00:13:55.596 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:55.596 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:55.596 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:55.858 null3 00:13:55.858 08:52:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:55.858 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:55.858 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:56.119 null4 00:13:56.119 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.119 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.119 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:56.119 null5 00:13:56.119 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.119 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.119 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:56.381 null6 00:13:56.381 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.381 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.381 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:56.642 null7 00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:56.642 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:13:56.643 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:13:56.643 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2489378 2489379 2489381 2489383 2489386 2489388 2489390 2489392
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:56.643 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:13:56.904 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:13:56.904 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:56.905 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:13:57.166 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:13:57.167 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:57.167 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:13:57.167 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:13:57.167 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:13:57.167 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:13:57.167 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:13:57.167 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:57.167 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.167 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.167 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:13:57.167 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.167 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.167 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:13:57.428 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.428 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:57.429 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:13:57.690 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:13:57.690 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.690 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.690 08:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:13:57.690 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:57.952 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:13:57.952 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:57.952 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:13:57.952 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.952 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.952 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:13:57.952 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:13:57.952 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:13:57.952 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.952 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.952 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:57.953 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:13:58.214 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:58.214 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:58.214 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:13:58.214 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:13:58.214 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:13:58.214 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:13:58.214 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:13:58.214 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:58.214 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:58.214 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:13:58.215 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:58.215 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:58.215 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:13:58.215 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:58.215 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:58.215 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:58.476 08:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:58.738 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:13:58.739 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:58.739 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:13:58.739 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:13:58.739 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:13:58.739 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:58.999 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:58.999 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:58.999 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:13:58.999 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:13:58.999 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:13:58.999 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:58.999 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:58.999 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:13:58.999 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.000 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.261 08:52:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.261 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:59.522 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.522 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.522 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:59.522 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.522 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.522 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.522 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.522 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:59.522 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.522 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.522 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:59.522 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.522 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.522 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.522 08:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.522 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.522 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.522 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:59.522 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.522 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.522 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.523 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:59.523 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.523 08:52:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.523 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.523 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:59.784 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.785 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.785 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.785 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.785 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.785 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.046 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.046 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.046 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:00.046 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:00.046 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:00.046 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.046 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.046 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.046 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.046 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.046 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.046 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.046 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.306 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:00.307 rmmod nvme_tcp 00:14:00.307 rmmod nvme_fabrics 00:14:00.307 rmmod nvme_keyring 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2482514 ']' 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2482514 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 2482514 ']' 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 2482514 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2482514 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 
00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2482514' 00:14:00.307 killing process with pid 2482514 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 2482514 00:14:00.307 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 2482514 00:14:00.574 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:00.574 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:00.574 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:00.574 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:00.574 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:00.574 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.574 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.574 08:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.491 08:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:02.491 00:14:02.491 real 0m47.458s 00:14:02.491 user 3m9.277s 00:14:02.491 sys 0m15.021s 00:14:02.491 08:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:02.491 08:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.491 ************************************ 00:14:02.491 END TEST nvmf_ns_hotplug_stress 00:14:02.491 ************************************ 00:14:02.491 08:52:24 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test 
nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:02.491 08:52:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:02.491 08:52:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:02.491 08:52:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:02.491 ************************************ 00:14:02.491 START TEST nvmf_connect_stress 00:14:02.491 ************************************ 00:14:02.491 08:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:02.753 * Looking for test storage... 00:14:02.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.753 08:52:25 
nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:02.753 
08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- 
# xtrace_disable 00:14:02.753 08:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:09.345 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.345 08:52:31 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:09.345 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.345 
08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:09.345 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.345 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:09.346 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:09.346 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.607 08:52:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.607 08:52:31 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.607 08:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:09.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:09.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:14:09.607 00:14:09.607 --- 10.0.0.2 ping statistics --- 00:14:09.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.607 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:14:09.607 08:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:09.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.485 ms 00:14:09.607 00:14:09.607 --- 10.0.0.1 ping statistics --- 00:14:09.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.607 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:14:09.607 08:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.607 08:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:09.607 08:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:09.607 08:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.607 08:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:09.607 08:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:09.607 08:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.607 08:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:09.607 08:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:09.607 08:52:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # 
nvmfappstart -m 0xE 00:14:09.607 08:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:09.607 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:09.607 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.607 08:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2494446 00:14:09.608 08:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2494446 00:14:09.608 08:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:09.608 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 2494446 ']' 00:14:09.608 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.608 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:09.608 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.608 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:09.608 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.608 [2024-06-09 08:52:32.125783] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
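The namespace plumbing traced above (nvmf/common.sh@242-268) is easier to follow in one place. Below is a reconstruction as a dry run: `run()` echoes each command instead of executing it, since the real sequence needs root and the physical `cvl_0_0`/`cvl_0_1` interfaces that exist only on this CI host. The commands themselves are the ones visible in the trace.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup from nvmf/common.sh@242-268.
# run() prints instead of executing: the real commands need root and the
# cvl_0_* interfaces present only on the test machine.
run() { echo "+ $*"; }

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NVMF_TARGET_NAMESPACE"
run ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"   # target NIC moves into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays outside
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                   # verify cross-ns reachability
```

The two `ping -c 1` checks in the log (0.631 ms and 0.485 ms round trips) confirm the target NIC inside the namespace and the initiator NIC outside it can reach each other before the target is started.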
00:14:09.608 [2024-06-09 08:52:32.125851] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.608 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.869 [2024-06-09 08:52:32.212925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:09.869 [2024-06-09 08:52:32.307088] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.869 [2024-06-09 08:52:32.307148] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.869 [2024-06-09 08:52:32.307157] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.869 [2024-06-09 08:52:32.307164] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.869 [2024-06-09 08:52:32.307170] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
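The `waitforlisten 2494446` call above blocks until the just-launched `nvmf_tgt` answers on `/var/tmp/spdk.sock`. The real helper lives in common/autotest_common.sh and is not shown in this trace; the sketch below is an assumption about its shape, polling for the RPC socket while confirming the process is still alive (`max_retries=100` and the default socket path match the values visible in the log).

```shell
# Hedged sketch of waitforlisten from common/autotest_common.sh: poll for the
# RPC unix socket while confirming the target process has not died.
# max_retries=100 and /var/tmp/spdk.sock match the trace; the body is a guess.
waitforlisten_sketch() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
  for ((i = 0; i < max_retries; i++)); do
    [ -S "$rpc_addr" ] && return 0          # socket exists: target is listening
    kill -0 "$pid" 2>/dev/null || return 1  # target died before it listened
    sleep 0.1
  done
  return 1                                  # timed out waiting for the socket
}
```

In this run the gate opens almost immediately (08:52:32 start, return 0 by 08:52:32), after which the script installs the `process_shm ... nvmftestfini` trap and proceeds to RPC setup.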
00:14:09.869 [2024-06-09 08:52:32.307314] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.869 [2024-06-09 08:52:32.307469] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.869 [2024-06-09 08:52:32.307504] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.441 [2024-06-09 08:52:32.952697] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
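The target-side setup that connect_stress.sh@15-18 drives through `rpc_cmd` can be read off the trace directly. Reconstructed here as a dry run (`rpc_cmd` just echoes; the real helper forwards to SPDK's RPC interface on `/var/tmp/spdk.sock`):

```shell
# The four RPCs connect_stress.sh@15-18 issues, per the trace. rpc_cmd here
# is a stand-in that echoes its arguments instead of talking to the target.
rpc_cmd() { echo "rpc: $*"; }

rpc_cmd nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192 B IO unit
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                   # allow any host, serial, max ns
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                       # listen inside the netns
rpc_cmd bdev_null_create NULL1 1000 512                  # null bdev: 1000 MiB, 512 B blocks
```

Each RPC is answered by a NOTICE in the log ("TCP Transport Init", "Listening on 10.0.0.2 port 4420"), after which the `connect_stress` binary is pointed at `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1`.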
00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.441 [2024-06-09 08:52:32.984568] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.441 08:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.701 NULL1 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2494569 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 
08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 2494569 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.701 08:52:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.961 08:52:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:10.961 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:10.961 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.961 08:52:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.961 08:52:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.221 08:52:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.221 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:11.221 08:52:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.221 08:52:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:11.221 08:52:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.820 08:52:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.820 08:52:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:11.820 08:52:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.820 08:52:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:11.820 08:52:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.081 08:52:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:12.081 08:52:34 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 2494569 00:14:12.081 08:52:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.081 08:52:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:12.081 08:52:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.341 08:52:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:12.341 08:52:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:12.341 08:52:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.341 08:52:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:12.341 08:52:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.601 08:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:12.601 08:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:12.601 08:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.601 08:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:12.602 08:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.861 08:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:12.861 08:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:12.861 08:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.861 08:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:12.861 08:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.432 08:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:13.432 08:52:35 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 2494569 00:14:13.432 08:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.432 08:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:13.432 08:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.692 08:52:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:13.692 08:52:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:13.692 08:52:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.692 08:52:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:13.692 08:52:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.952 08:52:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:13.952 08:52:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:13.953 08:52:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.953 08:52:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:13.953 08:52:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.213 08:52:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:14.213 08:52:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:14.213 08:52:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.213 08:52:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:14.213 08:52:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.473 08:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:14.473 08:52:37 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 2494569 00:14:14.473 08:52:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.473 08:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:14.473 08:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.061 08:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.061 08:52:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:15.061 08:52:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.061 08:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.061 08:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.322 08:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.322 08:52:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:15.322 08:52:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.322 08:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.322 08:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.583 08:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.583 08:52:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:15.583 08:52:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.583 08:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.583 08:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.843 08:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.843 08:52:38 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 2494569 00:14:15.843 08:52:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.843 08:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.843 08:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.104 08:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.104 08:52:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:16.104 08:52:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.104 08:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.104 08:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.676 08:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.676 08:52:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:16.676 08:52:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.676 08:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.676 08:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.937 08:52:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.937 08:52:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:16.937 08:52:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.937 08:52:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.937 08:52:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.198 08:52:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.198 08:52:39 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 2494569 00:14:17.198 08:52:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.198 08:52:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.198 08:52:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.459 08:52:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.459 08:52:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:17.459 08:52:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.459 08:52:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.459 08:52:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.721 08:52:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.721 08:52:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:17.721 08:52:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.721 08:52:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.721 08:52:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.293 08:52:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:18.293 08:52:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:18.293 08:52:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.293 08:52:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:18.293 08:52:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.553 08:52:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:18.553 08:52:40 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 2494569 00:14:18.553 08:52:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.553 08:52:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:18.553 08:52:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.814 08:52:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:18.814 08:52:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:18.814 08:52:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.814 08:52:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:18.814 08:52:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.075 08:52:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:19.075 08:52:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:19.075 08:52:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.075 08:52:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:19.075 08:52:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.335 08:52:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:19.335 08:52:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:19.335 08:52:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.335 08:52:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:19.335 08:52:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.907 08:52:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:19.907 08:52:42 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 2494569 00:14:19.907 08:52:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.907 08:52:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:19.907 08:52:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.168 08:52:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:20.168 08:52:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:20.168 08:52:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.168 08:52:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:20.168 08:52:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.428 08:52:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:20.428 08:52:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:20.428 08:52:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.428 08:52:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:20.428 08:52:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.689 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:20.689 08:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:20.689 08:52:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2494569 00:14:20.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2494569) - No such process 00:14:20.689 08:52:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2494569 00:14:20.689 08:52:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:20.689 08:52:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:20.689 08:52:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:20.690 08:52:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:20.690 08:52:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:20.690 08:52:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:20.690 08:52:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:20.690 08:52:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:20.690 08:52:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:20.690 rmmod nvme_tcp 00:14:20.690 rmmod nvme_fabrics 00:14:20.690 rmmod nvme_keyring 00:14:20.949 08:52:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:20.949 08:52:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:20.949 08:52:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:20.949 08:52:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2494446 ']' 00:14:20.949 08:52:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2494446 00:14:20.949 08:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 2494446 ']' 00:14:20.949 08:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 2494446 00:14:20.949 08:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:14:20.949 08:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:20.949 08:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2494446 00:14:20.950 08:52:43 nvmf_tcp.nvmf_connect_stress 
-- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:20.950 08:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:20.950 08:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2494446' 00:14:20.950 killing process with pid 2494446 00:14:20.950 08:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 2494446 00:14:20.950 08:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 2494446 00:14:20.950 08:52:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:20.950 08:52:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:20.950 08:52:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:20.950 08:52:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.950 08:52:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:20.950 08:52:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.950 08:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.950 08:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.497 08:52:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:23.497 00:14:23.497 real 0m20.491s 00:14:23.497 user 0m41.634s 00:14:23.497 sys 0m8.594s 00:14:23.497 08:52:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:23.497 08:52:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.497 ************************************ 00:14:23.497 END TEST nvmf_connect_stress 00:14:23.497 ************************************ 00:14:23.497 08:52:45 nvmf_tcp -- nvmf/nvmf.sh@34 -- # 
run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:23.497 08:52:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:23.497 08:52:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:23.497 08:52:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:23.497 ************************************ 00:14:23.497 START TEST nvmf_fused_ordering 00:14:23.497 ************************************ 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:23.497 * Looking for test storage... 00:14:23.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.497 08:52:45 
nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:23.497 
08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- 
# xtrace_disable 00:14:23.497 08:52:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:30.084 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:30.084 08:52:52 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:30.084 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.084 
08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:30.084 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:30.084 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:30.084 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:30.085 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.085 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:30.085 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:30.085 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:30.085 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:30.085 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:30.085 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:30.085 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:30.085 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:30.346 08:52:52 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:30.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:30.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:14:30.346 00:14:30.346 --- 10.0.0.2 ping statistics --- 00:14:30.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.346 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:30.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:30.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.405 ms 00:14:30.346 00:14:30.346 --- 10.0.0.1 ping statistics --- 00:14:30.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.346 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # 
nvmfappstart -m 0x2 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2500859 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2500859 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 2500859 ']' 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:30.346 08:52:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.346 [2024-06-09 08:52:52.829430] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:14:30.346 [2024-06-09 08:52:52.829496] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:30.346 EAL: No free 2048 kB hugepages reported on node 1
00:14:30.607 [2024-06-09 08:52:52.916105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:30.607 [2024-06-09 08:52:53.008978] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:30.607 [2024-06-09 08:52:53.009039] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:30.607 [2024-06-09 08:52:53.009047] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:30.607 [2024-06-09 08:52:53.009054] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:30.607 [2024-06-09 08:52:53.009060] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:30.607 [2024-06-09 08:52:53.009098] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:31.180 [2024-06-09 08:52:53.668123] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:31.180 [2024-06-09 08:52:53.692410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:31.180 NULL1
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:31.180 08:52:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
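The xtrace above assembles the whole target through `rpc_cmd`: TCP transport, subsystem `cnode1`, a listener on 10.0.0.2:4420, a null bdev, and a namespace. As a dry-run sketch of that same RPC sequence (the `run` wrapper and the `SPDK_DIR` default are assumptions for illustration; swap the `echo` for real execution against a live `nvmf_tgt`):

```shell
#!/usr/bin/env sh
# Dry-run sketch of the RPC sequence driven by target/fused_ordering.sh above.
# SPDK_DIR default and the run() wrapper are assumed; commands mirror the log.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
RPC="$SPDK_DIR/scripts/rpc.py"
run() { echo "+ $RPC $*"; }   # replace 'echo ...' with "$RPC" "$@" on a live target

run nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, 8192-byte in-capsule data
run nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
run nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
run bdev_null_create NULL1 1000 512                            # 1000 MiB null bdev, 512-byte blocks
run bdev_wait_for_examine                                      # let bdev examine callbacks finish
run nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1     # expose NULL1 as namespace 1
```

With the target assembled this way, the `fused_ordering` binary can connect using the same `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1` connection string seen in the trace.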
00:14:31.441 [2024-06-09 08:52:53.747880] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:14:31.441 [2024-06-09 08:52:53.747913] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2500935 ]
00:14:31.441 EAL: No free 2048 kB hugepages reported on node 1
00:14:32.013 Attached to nqn.2016-06.io.spdk:cnode1
00:14:32.013 Namespace ID: 1 size: 1GB
00:14:32.013 fused_ordering(0)
[fused_ordering(1) through fused_ordering(1022) elided: one counter line per iteration, timestamps 00:14:32.013 through 00:14:34.740]
00:14:34.740 fused_ordering(1023)
00:14:34.740 08:52:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:14:34.740 08:52:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:14:34.740 08:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:34.740 08:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync
00:14:34.740 08:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:34.740 08:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e
00:14:34.740 08:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:34.740 08:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:34.740 rmmod nvme_tcp
00:14:34.740 rmmod nvme_fabrics
00:14:34.740 rmmod nvme_keyring
00:14:34.740 08:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:34.740 08:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e
00:14:34.740 08:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0
00:14:34.740 08:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2500859 ']'
00:14:34.740 08:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2500859
00:14:34.740 08:52:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 2500859 ']'
00:14:34.740 08:52:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 2500859
00:14:34.740 08:52:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname
00:14:34.740 08:52:57
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:34.740 08:52:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2500859 00:14:35.001 08:52:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:35.001 08:52:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:35.001 08:52:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2500859' 00:14:35.001 killing process with pid 2500859 00:14:35.001 08:52:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 2500859 00:14:35.001 08:52:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 2500859 00:14:35.001 08:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:35.001 08:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:35.001 08:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:35.001 08:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:35.001 08:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:35.001 08:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.001 08:52:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.002 08:52:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.542 08:52:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:37.542 00:14:37.542 real 0m13.897s 00:14:37.542 user 0m7.941s 00:14:37.542 sys 0m7.655s 00:14:37.542 08:52:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:37.542 08:52:59 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:37.542 ************************************ 00:14:37.542 END TEST nvmf_fused_ordering 00:14:37.542 ************************************ 00:14:37.542 08:52:59 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:37.542 08:52:59 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:37.542 08:52:59 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:37.542 08:52:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:37.542 ************************************ 00:14:37.542 START TEST nvmf_delete_subsystem 00:14:37.542 ************************************ 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:37.542 * Looking for test storage... 
00:14:37.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.542 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:37.543 08:52:59 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:37.543 08:52:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:44.131 08:53:06 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:44.131 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:44.132 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:44.132 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:44.132 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:44.132 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:44.132 
08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:44.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:44.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.778 ms 00:14:44.132 00:14:44.132 --- 10.0.0.2 ping statistics --- 00:14:44.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.132 rtt min/avg/max/mdev = 0.778/0.778/0.778/0.000 ms 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:44.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:44.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.469 ms 00:14:44.132 00:14:44.132 --- 10.0.0.1 ping statistics --- 00:14:44.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.132 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:44.132 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:44.394 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:44.394 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:44.394 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:44.394 
08:53:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:44.394 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2505897 00:14:44.394 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2505897 00:14:44.394 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:44.394 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 2505897 ']' 00:14:44.394 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.394 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:44.394 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.394 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:44.394 08:53:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:44.394 [2024-06-09 08:53:06.771529] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:14:44.394 [2024-06-09 08:53:06.771581] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.394 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.394 [2024-06-09 08:53:06.836337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:44.394 [2024-06-09 08:53:06.900641] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:44.394 [2024-06-09 08:53:06.900677] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.394 [2024-06-09 08:53:06.900684] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.394 [2024-06-09 08:53:06.900690] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.394 [2024-06-09 08:53:06.900698] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.394 [2024-06-09 08:53:06.900834] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.394 [2024-06-09 08:53:06.900835] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.337 [2024-06-09 08:53:07.571731] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.337 [2024-06-09 08:53:07.587916] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.337 NULL1 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.337 Delay0 00:14:45.337 08:53:07 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2505950 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:45.337 08:53:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:45.337 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.337 [2024-06-09 08:53:07.672499] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:14:47.252 08:53:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.252 08:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:47.252 08:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 starting I/O failed: -6 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 starting I/O failed: -6 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 starting I/O failed: -6 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 starting I/O failed: -6 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 starting I/O failed: -6 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 starting I/O failed: -6 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 starting I/O failed: -6 
00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 starting I/O failed: -6 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 starting I/O failed: -6 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 starting I/O failed: -6 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 starting I/O failed: -6 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with 
error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 [2024-06-09 08:53:09.887849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1312040 is same with the state(5) to be set 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 starting I/O failed: -6 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 starting I/O failed: -6 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 starting I/O failed: -6 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 starting I/O failed: -6 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 starting I/O failed: -6 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 starting I/O failed: -6 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 starting I/O failed: -6 00:14:47.513 Read completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.513 starting 
I/O failed: -6 00:14:47.513 Write completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 [2024-06-09 08:53:09.892980] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb6c8000c00 is same with the state(5) to be set 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Read completed with error (sct=0, 
sc=8) 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Write completed with 
error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Write completed with error (sct=0, sc=8) 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 starting I/O failed: -6 00:14:47.514 Read completed with error (sct=0, sc=8) 00:14:47.514 [2024-06-09 08:53:09.893430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb6c800c470 is same with the state(5) to be set 00:14:48.457 [2024-06-09 08:53:10.855477] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1550 is same with the state(5) to be set 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 [2024-06-09 08:53:10.889980] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1312220 is same with the state(5) to be set 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 [2024-06-09 08:53:10.890295] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1311e60 is same with the state(5) to be set 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error 
(sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 [2024-06-09 08:53:10.894927] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb6c800c780 is same with the state(5) to be set 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, 
sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Write completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 Read completed with error (sct=0, sc=8) 00:14:48.457 [2024-06-09 08:53:10.895085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb6c800bfe0 is same with the state(5) to be set 00:14:48.457 Initializing NVMe Controllers 00:14:48.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:48.457 Controller IO queue size 128, less than required. 00:14:48.457 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:48.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:48.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:48.457 Initialization complete. Launching workers. 
00:14:48.458 ======================================================== 00:14:48.458 Latency(us) 00:14:48.458 Device Information : IOPS MiB/s Average min max 00:14:48.458 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.85 0.08 906702.99 281.40 1006634.66 00:14:48.458 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 172.81 0.08 924176.63 449.03 1011899.86 00:14:48.458 ======================================================== 00:14:48.458 Total : 337.66 0.16 915645.99 281.40 1011899.86 00:14:48.458 00:14:48.458 [2024-06-09 08:53:10.895859] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1550 (9): Bad file descriptor 00:14:48.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:48.458 08:53:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:48.458 08:53:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:48.458 08:53:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2505950 00:14:48.458 08:53:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:49.030 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:49.030 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2505950 00:14:49.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2505950) - No such process 00:14:49.030 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2505950 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 2505950 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@637 -- # local arg=wait 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 2505950 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:49.031 [2024-06-09 08:53:11.427631] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2506735 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2506735 00:14:49.031 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:49.031 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.031 [2024-06-09 08:53:11.493299] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:14:49.602 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:49.602 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2506735 00:14:49.602 08:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:50.173 08:53:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:50.173 08:53:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2506735 00:14:50.173 08:53:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:50.434 08:53:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:50.434 08:53:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2506735 00:14:50.434 08:53:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:51.003 08:53:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:51.003 08:53:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2506735 00:14:51.003 08:53:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:51.572 08:53:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:51.572 08:53:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2506735 00:14:51.572 08:53:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:52.143 08:53:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:52.143 08:53:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2506735 00:14:52.143 08:53:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:52.143 Initializing NVMe Controllers 00:14:52.143 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:52.143 Controller IO queue size 128, less than required. 00:14:52.143 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:52.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:52.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:52.143 Initialization complete. Launching workers. 00:14:52.143 ======================================================== 00:14:52.143 Latency(us) 00:14:52.143 Device Information : IOPS MiB/s Average min max 00:14:52.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002399.38 1000119.40 1007397.57 00:14:52.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003249.05 1000354.77 1041400.27 00:14:52.144 ======================================================== 00:14:52.144 Total : 256.00 0.12 1002824.21 1000119.40 1041400.27 00:14:52.144 00:14:52.715 08:53:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:52.715 08:53:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2506735 00:14:52.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2506735) - No such process 00:14:52.715 08:53:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2506735 00:14:52.715 08:53:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:52.715 08:53:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:52.715 08:53:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:52.715 08:53:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:52.715 08:53:14 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:52.715 08:53:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:52.715 08:53:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:52.715 08:53:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:52.715 rmmod nvme_tcp 00:14:52.715 rmmod nvme_fabrics 00:14:52.715 rmmod nvme_keyring 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2505897 ']' 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2505897 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 2505897 ']' 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 2505897 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2505897 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2505897' 00:14:52.715 killing process with pid 2505897 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 2505897 00:14:52.715 08:53:15 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 2505897 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.715 08:53:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.259 08:53:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:55.259 00:14:55.259 real 0m17.747s 00:14:55.259 user 0m30.864s 00:14:55.259 sys 0m6.088s 00:14:55.259 08:53:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:55.259 08:53:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:55.259 ************************************ 00:14:55.259 END TEST nvmf_delete_subsystem 00:14:55.259 ************************************ 00:14:55.259 08:53:17 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:55.259 08:53:17 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:55.259 08:53:17 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:55.259 08:53:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:55.259 ************************************ 00:14:55.259 START TEST nvmf_ns_masking 00:14:55.259 
************************************ 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:55.259 * Looking for test storage... 00:14:55.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.259 08:53:17 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=5ab2ad44-44b9-47be-bd7d-5e55582ff4b8 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:55.260 08:53:17 nvmf_tcp.nvmf_ns_masking 
-- common/autotest_common.sh@10 -- # set +x 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.893 
08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:01.893 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:01.893 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:01.893 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:01.893 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:01.893 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:01.894 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:01.894 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.894 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.894 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:01.894 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:01.894 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:15:01.894 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:01.894 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:01.894 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:01.894 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.894 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:01.894 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:01.894 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:01.894 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:02.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:02.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:15:02.155 00:15:02.155 --- 10.0.0.2 ping statistics --- 00:15:02.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.155 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:02.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:15:02.155 00:15:02.155 --- 10.0.0.1 ping statistics --- 00:15:02.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.155 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 
00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2511625 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2511625 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 2511625 ']' 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:02.155 08:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:02.415 [2024-06-09 08:53:24.716043] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:15:02.415 [2024-06-09 08:53:24.716106] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.415 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.415 [2024-06-09 08:53:24.786455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.415 [2024-06-09 08:53:24.862193] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.415 [2024-06-09 08:53:24.862230] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:02.415 [2024-06-09 08:53:24.862238] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.416 [2024-06-09 08:53:24.862244] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.416 [2024-06-09 08:53:24.862250] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.416 [2024-06-09 08:53:24.862438] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.416 [2024-06-09 08:53:24.862675] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.416 [2024-06-09 08:53:24.862511] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.416 [2024-06-09 08:53:24.862676] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.988 08:53:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:02.988 08:53:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:15:02.988 08:53:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.988 08:53:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:02.988 08:53:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:02.988 08:53:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.988 08:53:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:03.249 [2024-06-09 08:53:25.680446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.249 08:53:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:15:03.249 08:53:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:15:03.249 08:53:25 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:03.509 Malloc1 00:15:03.509 08:53:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:03.509 Malloc2 00:15:03.509 08:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:03.770 08:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:04.030 08:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.030 [2024-06-09 08:53:26.533078] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.030 08:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:15:04.030 08:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5ab2ad44-44b9-47be-bd7d-5e55582ff4b8 -a 10.0.0.2 -s 4420 -i 4 00:15:04.291 08:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:15:04.291 08:53:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:15:04.291 08:53:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:04.291 08:53:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:15:04.291 08:53:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 
-- # sleep 2 00:15:06.838 08:53:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:06.838 08:53:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:06.838 08:53:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:06.838 08:53:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:06.838 08:53:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:06.838 08:53:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:15:06.838 08:53:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:06.838 08:53:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:06.838 08:53:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:06.838 08:53:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:06.838 08:53:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:15:06.838 08:53:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:06.838 08:53:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:06.838 [ 0]:0x1 00:15:06.838 08:53:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:06.838 08:53:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:06.838 08:53:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=cb115f8f8746411d9f335623baebfc70 00:15:06.838 08:53:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ cb115f8f8746411d9f335623baebfc70 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:06.838 08:53:28 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:06.838 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:06.838 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:06.838 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:06.838 [ 0]:0x1 00:15:06.838 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:06.838 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:06.838 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=cb115f8f8746411d9f335623baebfc70 00:15:06.838 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ cb115f8f8746411d9f335623baebfc70 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:06.838 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:06.838 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:06.838 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:06.838 [ 1]:0x2 00:15:06.838 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:06.838 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:06.838 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa1ee167cc9b42099cd5e11e09eaeea0 00:15:06.838 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa1ee167cc9b42099cd5e11e09eaeea0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:06.838 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:15:06.838 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:15:06.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.838 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.099 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:07.099 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:15:07.099 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5ab2ad44-44b9-47be-bd7d-5e55582ff4b8 -a 10.0.0.2 -s 4420 -i 4 00:15:07.359 08:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:07.359 08:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:15:07.359 08:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:07.359 08:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:15:07.359 08:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:15:07.359 08:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:15:09.903 08:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:09.903 08:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:09.903 08:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:09.903 08:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:09.903 08:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( 
nvme_devices == nvme_device_counter )) 00:15:09.903 08:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:15:09.903 08:53:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:09.904 08:53:31 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:09.904 [ 0]:0x2 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:09.904 08:53:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:09.904 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa1ee167cc9b42099cd5e11e09eaeea0 00:15:09.904 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa1ee167cc9b42099cd5e11e09eaeea0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.904 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:09.904 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:09.904 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:09.904 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:09.904 [ 0]:0x1 00:15:09.904 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 
-n 0x1 -o json 00:15:09.904 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:09.904 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=cb115f8f8746411d9f335623baebfc70 00:15:09.904 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ cb115f8f8746411d9f335623baebfc70 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.904 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:09.904 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:09.904 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:09.904 [ 1]:0x2 00:15:09.904 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:09.904 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:09.904 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa1ee167cc9b42099cd5e11e09eaeea0 00:15:09.904 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa1ee167cc9b42099cd5e11e09eaeea0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.904 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
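The trace above repeatedly invokes `ns_is_visible`, which decides whether a namespace is visible to the connected host by comparing the NGUID reported by `nvme id-ns -o json` against the all-zero NGUID that a masked (inactive) namespace returns. A minimal self-contained sketch of that comparison, with a hypothetical helper name (the real helper lives in `target/ns_masking.sh` and also greps `nvme list-ns` output):

```shell
#!/usr/bin/env bash
# Sketch of the visibility check traced above: a namespace that is
# masked from this host reports an all-zero NGUID via Identify Namespace.

nguid_is_visible() {
    local nguid=$1
    # 32 hex zeros means the namespace is not attached/visible to this host
    [[ $nguid != "00000000000000000000000000000000" ]]
}

# In the real test the NGUID comes from:
#   nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid
if nguid_is_visible "fa1ee167cc9b42099cd5e11e09eaeea0"; then
    echo visible
else
    echo hidden
fi
```

This is why the log alternates between `nguid=fa1ee167...` (visible) and `nguid=00000000...` (hidden) as the test adds and removes `nqn.2016-06.io.spdk:host1` with `nvmf_ns_add_host` / `nvmf_ns_remove_host`.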
00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:10.165 [ 0]:0x2 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@40 -- # nguid=fa1ee167cc9b42099cd5e11e09eaeea0 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa1ee167cc9b42099cd5e11e09eaeea0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:15:10.165 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:10.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.166 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:10.426 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:15:10.426 08:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5ab2ad44-44b9-47be-bd7d-5e55582ff4b8 -a 10.0.0.2 -s 4420 -i 4 00:15:10.685 08:53:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:10.685 08:53:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:15:10.685 08:53:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:10.685 08:53:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:15:10.685 08:53:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:15:10.685 08:53:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:15:12.599 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:12.599 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:12.599 08:53:35 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:12.599 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:15:12.599 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:12.599 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:15:12.599 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:12.599 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:12.860 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:12.860 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:12.860 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:12.860 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:12.860 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:12.860 [ 0]:0x1 00:15:12.860 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:12.860 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:12.860 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=cb115f8f8746411d9f335623baebfc70 00:15:12.860 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ cb115f8f8746411d9f335623baebfc70 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.860 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:12.861 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:12.861 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:12.861 [ 1]:0x2 00:15:12.861 08:53:35 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:12.861 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:12.861 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa1ee167cc9b42099cd5e11e09eaeea0 00:15:12.861 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa1ee167cc9b42099cd5e11e09eaeea0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.861 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:13.122 08:53:35 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:13.122 [ 0]:0x2 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa1ee167cc9b42099cd5e11e09eaeea0 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa1ee167cc9b42099cd5e11e09eaeea0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:13.122 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:13.123 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host 
nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:13.123 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.123 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:13.123 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.123 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:13.123 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.123 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:13.123 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.123 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:13.123 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:13.384 [2024-06-09 08:53:35.818126] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:13.384 request: 00:15:13.384 { 00:15:13.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.384 "nsid": 2, 00:15:13.384 "host": "nqn.2016-06.io.spdk:host1", 00:15:13.384 "method": "nvmf_ns_remove_host", 00:15:13.384 "req_id": 1 00:15:13.384 } 00:15:13.384 Got JSON-RPC error response 00:15:13.384 response: 00:15:13.384 { 00:15:13.384 "code": -32602, 00:15:13.384 "message": "Invalid parameters" 00:15:13.384 } 00:15:13.384 
08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@652 -- # es=1 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:13.384 [ 0]:0x2 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:13.384 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:13.645 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa1ee167cc9b42099cd5e11e09eaeea0 00:15:13.645 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa1ee167cc9b42099cd5e11e09eaeea0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:13.645 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:15:13.645 08:53:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:13.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.645 08:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:13.645 08:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:13.645 08:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:13.645 08:53:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:13.645 08:53:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 
-- # sync 00:15:13.645 08:53:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:13.645 08:53:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:13.645 08:53:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:13.645 08:53:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:13.645 rmmod nvme_tcp 00:15:13.906 rmmod nvme_fabrics 00:15:13.906 rmmod nvme_keyring 00:15:13.906 08:53:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:13.906 08:53:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:13.906 08:53:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:13.906 08:53:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2511625 ']' 00:15:13.906 08:53:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2511625 00:15:13.906 08:53:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 2511625 ']' 00:15:13.906 08:53:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 2511625 00:15:13.906 08:53:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:15:13.906 08:53:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:13.906 08:53:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2511625 00:15:13.906 08:53:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:13.906 08:53:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:13.906 08:53:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2511625' 00:15:13.906 killing process with pid 2511625 00:15:13.906 08:53:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 2511625 00:15:13.906 08:53:36 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@973 -- # wait 2511625 00:15:14.167 08:53:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:14.167 08:53:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:14.167 08:53:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:14.167 08:53:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:14.167 08:53:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:14.167 08:53:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.167 08:53:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.167 08:53:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.079 08:53:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:16.079 00:15:16.079 real 0m21.142s 00:15:16.079 user 0m50.869s 00:15:16.079 sys 0m6.822s 00:15:16.079 08:53:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:16.079 08:53:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:16.079 ************************************ 00:15:16.079 END TEST nvmf_ns_masking 00:15:16.079 ************************************ 00:15:16.079 08:53:38 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:16.079 08:53:38 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:16.079 08:53:38 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:16.079 08:53:38 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:16.079 08:53:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:16.079 ************************************ 00:15:16.079 START TEST nvmf_nvme_cli 00:15:16.079 
************************************ 00:15:16.079 08:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:16.341 * Looking for test storage... 00:15:16.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.342 08:53:38 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:16.342 08:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:22.937 08:53:45 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.937 08:53:45 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:22.937 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:22.937 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.937 08:53:45 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:22.937 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:22.938 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:22.938 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.938 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:23.199 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:23.199 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:23.199 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:23.199 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:23.199 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:23.199 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:23.199 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:23.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:23.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.713 ms 00:15:23.199 00:15:23.199 --- 10.0.0.2 ping statistics --- 00:15:23.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.199 rtt min/avg/max/mdev = 0.713/0.713/0.713/0.000 ms 00:15:23.199 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:23.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:23.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.487 ms 00:15:23.199 00:15:23.199 --- 10.0.0.1 ping statistics --- 00:15:23.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.199 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:15:23.199 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:23.199 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:23.199 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:23.199 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:23.199 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:23.199 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:23.199 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:23.199 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:23.199 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:23.460 08:53:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:23.460 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:23.460 08:53:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:23.460 08:53:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:23.460 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2518164 00:15:23.460 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2518164 00:15:23.460 08:53:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:23.460 08:53:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 2518164 ']' 
00:15:23.460 08:53:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.460 08:53:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:23.460 08:53:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.460 08:53:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:23.460 08:53:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:23.460 [2024-06-09 08:53:45.822488] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:15:23.460 [2024-06-09 08:53:45.822549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.460 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.460 [2024-06-09 08:53:45.893124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:23.460 [2024-06-09 08:53:45.969202] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.460 [2024-06-09 08:53:45.969239] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.460 [2024-06-09 08:53:45.969246] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.460 [2024-06-09 08:53:45.969253] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.460 [2024-06-09 08:53:45.969259] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:23.460 [2024-06-09 08:53:45.969394] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.460 [2024-06-09 08:53:45.969527] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.460 [2024-06-09 08:53:45.969578] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.460 [2024-06-09 08:53:45.969579] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:24.430 [2024-06-09 08:53:46.648914] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:24.430 Malloc0 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.430 
08:53:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:24.430 Malloc1 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 
00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:24.430 [2024-06-09 08:53:46.738742] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:15:24.430 00:15:24.430 Discovery Log Number of Records 2, Generation counter 2 00:15:24.430 =====Discovery Log Entry 0====== 00:15:24.430 trtype: tcp 00:15:24.430 adrfam: ipv4 00:15:24.430 subtype: current discovery subsystem 00:15:24.430 treq: not required 00:15:24.430 portid: 0 00:15:24.430 trsvcid: 4420 00:15:24.430 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:24.430 traddr: 10.0.0.2 00:15:24.430 eflags: explicit discovery connections, duplicate discovery information 00:15:24.430 sectype: none 00:15:24.430 =====Discovery Log Entry 1====== 00:15:24.430 trtype: tcp 00:15:24.430 adrfam: ipv4 00:15:24.430 subtype: nvme subsystem 00:15:24.430 treq: not required 00:15:24.430 portid: 0 00:15:24.430 trsvcid: 4420 00:15:24.430 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:24.430 traddr: 10.0.0.2 00:15:24.430 eflags: none 00:15:24.430 sectype: none 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:24.430 08:53:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:25.817 08:53:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:25.817 08:53:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:15:25.817 08:53:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:25.817 08:53:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:15:25.817 08:53:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:15:25.817 08:53:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 
00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:28.364 /dev/nvme0n1 ]] 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:28.364 08:53:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:28.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:28.625 08:53:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:28.625 rmmod nvme_tcp 00:15:28.625 rmmod nvme_fabrics 00:15:28.625 rmmod nvme_keyring 00:15:28.625 08:53:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:28.625 08:53:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:28.625 08:53:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:28.625 08:53:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2518164 ']' 00:15:28.626 08:53:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2518164 00:15:28.626 08:53:51 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@949 -- # '[' -z 2518164 ']' 00:15:28.626 08:53:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 2518164 00:15:28.626 08:53:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:15:28.626 08:53:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:28.626 08:53:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2518164 00:15:28.626 08:53:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:28.626 08:53:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:28.626 08:53:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2518164' 00:15:28.626 killing process with pid 2518164 00:15:28.626 08:53:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 2518164 00:15:28.626 08:53:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 2518164 00:15:28.886 08:53:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:28.886 08:53:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:28.886 08:53:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:28.886 08:53:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:28.886 08:53:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:28.886 08:53:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.886 08:53:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.886 08:53:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.828 08:53:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:30.828 00:15:30.828 real 0m14.725s 00:15:30.828 user 
0m23.139s 00:15:30.828 sys 0m5.795s 00:15:30.828 08:53:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:30.828 08:53:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:30.828 ************************************ 00:15:30.828 END TEST nvmf_nvme_cli 00:15:30.828 ************************************ 00:15:30.828 08:53:53 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:15:30.828 08:53:53 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:30.828 08:53:53 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:30.828 08:53:53 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:30.828 08:53:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:31.089 ************************************ 00:15:31.089 START TEST nvmf_host_management 00:15:31.089 ************************************ 00:15:31.089 08:53:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:31.089 * Looking for test storage... 
00:15:31.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.089 08:53:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.089 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:15:31.089 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.089 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.089 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.089 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.090 
08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:15:31.090 08:53:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:39.243 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:39.243 
08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:39.243 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:39.243 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:39.243 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:39.244 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:39.244 08:54:00 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:39.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:39.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:15:39.244 00:15:39.244 --- 10.0.0.2 ping statistics --- 00:15:39.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.244 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:39.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:39.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:15:39.244 00:15:39.244 --- 10.0.0.1 ping statistics --- 00:15:39.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.244 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:39.244 08:54:00 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2523531 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2523531 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 2523531 ']' 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:39.244 08:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:39.244 [2024-06-09 08:54:00.795478] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:15:39.244 [2024-06-09 08:54:00.795531] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.244 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.244 [2024-06-09 08:54:00.882925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:39.244 [2024-06-09 08:54:00.979713] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.244 [2024-06-09 08:54:00.979771] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.244 [2024-06-09 08:54:00.979779] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.244 [2024-06-09 08:54:00.979786] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.244 [2024-06-09 08:54:00.979792] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:39.244 [2024-06-09 08:54:00.979925] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.244 [2024-06-09 08:54:00.980093] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.244 [2024-06-09 08:54:00.980258] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.244 [2024-06-09 08:54:00.980259] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:39.244 [2024-06-09 08:54:01.624036] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:39.244 08:54:01 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:39.244 Malloc0 00:15:39.244 [2024-06-09 08:54:01.687368] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:39.244 08:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2523807 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2523807 /var/tmp/bdevperf.sock 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 2523807 ']' 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:39.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:39.245 { 00:15:39.245 "params": { 00:15:39.245 "name": "Nvme$subsystem", 00:15:39.245 "trtype": "$TEST_TRANSPORT", 00:15:39.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.245 "adrfam": "ipv4", 00:15:39.245 "trsvcid": "$NVMF_PORT", 00:15:39.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.245 "hdgst": ${hdgst:-false}, 00:15:39.245 "ddgst": ${ddgst:-false} 00:15:39.245 }, 00:15:39.245 "method": "bdev_nvme_attach_controller" 00:15:39.245 } 00:15:39.245 EOF 00:15:39.245 )") 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:39.245 08:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:39.245 "params": { 00:15:39.245 "name": "Nvme0", 00:15:39.245 "trtype": "tcp", 00:15:39.245 "traddr": "10.0.0.2", 00:15:39.245 "adrfam": "ipv4", 00:15:39.245 "trsvcid": "4420", 00:15:39.245 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:39.245 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:39.245 "hdgst": false, 00:15:39.245 "ddgst": false 00:15:39.245 }, 00:15:39.245 "method": "bdev_nvme_attach_controller" 00:15:39.245 }' 00:15:39.245 [2024-06-09 08:54:01.787814] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:15:39.245 [2024-06-09 08:54:01.787869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523807 ] 00:15:39.506 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.506 [2024-06-09 08:54:01.847022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.506 [2024-06-09 08:54:01.911593] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.766 Running I/O for 10 seconds... 
00:15:40.027 08:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:40.027 08:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:15:40.027 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:40.027 08:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.027 08:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:40.027 08:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.027 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:40.027 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:40.027 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:40.290 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:40.290 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:15:40.290 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:15:40.290 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:40.290 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:40.290 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:40.290 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:40.290 08:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.290 
08:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:15:40.290 08:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:40.290 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=345
00:15:40.290 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 345 -ge 100 ']'
00:15:40.290 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:15:40.290 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:15:40.290 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:15:40.290 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:15:40.290 08:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:40.290 08:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:15:40.290 [2024-06-09 08:54:02.639016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:40.290 [2024-06-09 08:54:02.639053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:40.290 [... 63 further in-flight command/completion pairs elided: WRITE sqid:1 cid:51-63 (lba 55680-57216) and READ sqid:1 cid:0-49 (lba 49152-55424), each len:128, all completing with the same ABORTED - SQ DELETION (00/08) status during queue-pair teardown ...]
00:15:40.292 [2024-06-09 08:54:02.640146] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b1e3c0 was disconnected and freed. reset controller.
00:15:40.292 [2024-06-09 08:54:02.641357] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:15:40.292 task offset: 55552 on job bdev=Nvme0n1 fails
00:15:40.292
00:15:40.292 Latency(us)
00:15:40.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:40.292 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:40.292 Job: Nvme0n1 ended in about 0.46 seconds with error
00:15:40.292 Verification LBA range: start 0x0 length 0x400
00:15:40.292 Nvme0n1 : 0.46 841.34 52.58 140.22 0.00 63530.48 1843.20 56797.87
00:15:40.292 ===================================================================================================================
00:15:40.292 Total : 841.34 52.58 140.22 0.00 63530.48 1843.20 56797.87
00:15:40.292 [2024-06-09 08:54:02.643353] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:15:40.292 [2024-06-09 08:54:02.643374] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e5030 (9): Bad file descriptor
00:15:40.292 08:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:40.292 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:15:40.292 08:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:40.292 08:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:15:40.292 [2024-06-09 08:54:02.650198] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:15:40.292 [2024-06-09 08:54:02.650337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:15:40.292 [2024-06-09 08:54:02.650362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:40.292 [2024-06-09 08:54:02.650375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:15:40.292 [2024-06-09 08:54:02.650382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:15:40.292 [2024-06-09 08:54:02.650390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:15:40.292 [2024-06-09 08:54:02.650397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5030
00:15:40.292 [2024-06-09 08:54:02.650424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e5030 (9): Bad file descriptor
00:15:40.292 [2024-06-09 08:54:02.650435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:15:40.292 [2024-06-09 08:54:02.650442] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:15:40.292 [2024-06-09 08:54:02.650450] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:15:40.292 [2024-06-09 08:54:02.650463] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
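The sequence logged above is the core of the host-management check: the script polls the I/O counter until it crosses a threshold (here 345 >= 100), then revokes the host with `rpc_cmd nvmf_subsystem_remove_host` and expects the initiator's next CONNECT to be rejected with COMMAND SPECIFIC (01/84), i.e. sct 1 / sc 132. A minimal sketch of that polling logic follows; the stubbed `read_io_count` and the `rpc.py` invocation named in the comments are illustrative assumptions, not taken from the log.

```shell
#!/usr/bin/env bash
# Hedged sketch of the threshold check seen in target/host_management.sh:
# poll an I/O counter, and only once traffic is confirmed revoke the host.

read_io_count() {
    # The real test queries the running bdevperf over RPC; stubbed here
    # so the control flow can be exercised standalone.
    counter=$((counter + 150))
    echo "$counter"
}

counter=0
ret=1
for _ in 1 2 3 4 5; do
    count=$(read_io_count)
    if [ "$count" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 1
done

# With traffic confirmed (ret=0), the test would now run, e.g.:
#   rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# after which reconnect attempts from host0 fail with sct 1 / sc 132,
# exactly as recorded in the log entries above.
echo "ret=$ret count=$count"
```

Polling before revoking matters: it proves the path was healthy, so the subsequent CONNECT rejection can only be attributed to the host removal.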
00:15:40.292 08:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:40.292 08:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:15:41.235 08:54:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2523807
00:15:41.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2523807) - No such process
00:15:41.235 08:54:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:15:41.235 08:54:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:15:41.235 08:54:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:15:41.235 08:54:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:15:41.235 08:54:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:15:41.235 08:54:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:15:41.235 08:54:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:15:41.235 08:54:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:15:41.235 {
00:15:41.235 "params": {
00:15:41.235 "name": "Nvme$subsystem",
00:15:41.235 "trtype": "$TEST_TRANSPORT",
00:15:41.235 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:41.235 "adrfam": "ipv4",
00:15:41.235 "trsvcid": "$NVMF_PORT",
00:15:41.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:41.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:41.235 "hdgst": ${hdgst:-false},
00:15:41.235 "ddgst": ${ddgst:-false}
00:15:41.235 },
00:15:41.235 "method": "bdev_nvme_attach_controller"
00:15:41.235 }
00:15:41.235 EOF
00:15:41.235 )")
00:15:41.235 08:54:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:15:41.235 08:54:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:15:41.235 08:54:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:15:41.235 08:54:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:15:41.235 "params": {
00:15:41.235 "name": "Nvme0",
00:15:41.235 "trtype": "tcp",
00:15:41.235 "traddr": "10.0.0.2",
00:15:41.235 "adrfam": "ipv4",
00:15:41.235 "trsvcid": "4420",
00:15:41.235 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:15:41.235 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:15:41.235 "hdgst": false,
00:15:41.235 "ddgst": false
00:15:41.235 },
00:15:41.235 "method": "bdev_nvme_attach_controller"
00:15:41.235 }'
00:15:41.235 [2024-06-09 08:54:03.713558] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:15:41.235 [2024-06-09 08:54:03.713647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2524289 ]
00:15:41.235 EAL: No free 2048 kB hugepages reported on node 1
00:15:41.235 [2024-06-09 08:54:03.776720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:41.496 [2024-06-09 08:54:03.841081] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:15:41.756 Running I/O for 1 seconds...
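The config block logged above shows how the test feeds bdevperf: `gen_nvmf_target_json` assembles one `bdev_nvme_attach_controller` stanza per subsystem id via a heredoc, and the result reaches bdevperf through an anonymous fd (`--json /dev/fd/62`). A self-contained sketch of that pattern follows; the field values mirror the resolved config printed in the log, but this simplified wrapper (no jq pass, fixed transport) is an assumption for illustration, not the exact nvmf/common.sh implementation.

```shell
#!/usr/bin/env bash
# Hedged sketch of the gen_nvmf_target_json pattern seen in the log:
# emit one attach-controller stanza per subsystem id for bdevperf --json.

gen_nvmf_target_json() {
    local subsystem
    # Default to subsystem 0 when no ids are given, as in the logged run.
    for subsystem in "${@:-0}"; do
        cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    done
}

config=$(gen_nvmf_target_json 0)
echo "$config"
# The test then hands this to bdevperf via process substitution, e.g.:
#   bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1
```

Passing the config over `/dev/fd` avoids writing a temporary file, so a crashed bdevperf leaves nothing behind for the cleanup stage to miss.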
00:15:42.704
00:15:42.704 Latency(us)
00:15:42.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:42.704 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:42.704 Verification LBA range: start 0x0 length 0x400
00:15:42.704 Nvme0n1 : 1.01 959.76 59.99 0.00 0.00 65532.31 1426.77 57234.77
00:15:42.704 ===================================================================================================================
00:15:42.704 Total : 959.76 59.99 0.00 0.00 65532.31 1426.77 57234.77
00:15:42.704 08:54:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:15:42.704 08:54:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:15:42.704 08:54:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:15:42.704 08:54:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:15:42.704 08:54:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:15:42.704 08:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:42.704 08:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:15:42.704 08:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:42.704 08:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:15:42.704 08:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:42.704 08:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:42.704 rmmod nvme_tcp
00:15:42.704 rmmod nvme_fabrics
00:15:42.965 rmmod nvme_keyring
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2523531 ']'
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2523531
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 2523531 ']'
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 2523531
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # uname
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2523531
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2523531'
00:15:42.965 killing process with pid 2523531
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 2523531
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 2523531
00:15:42.965 [2024-06-09 08:54:05.456625] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:42.965 08:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:45.509 08:54:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:15:45.509 08:54:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:15:45.509
00:15:45.509 real 0m14.126s
00:15:45.509 user 0m22.424s
00:15:45.509 sys 0m6.363s
00:15:45.509 08:54:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable
00:15:45.509 08:54:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:15:45.509 ************************************
00:15:45.509 END TEST nvmf_host_management
00:15:45.509 ************************************
00:15:45.509 08:54:07 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:15:45.509 08:54:07 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:15:45.509 08:54:07 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:15:45.509 08:54:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:15:45.509 ************************************
00:15:45.509 START TEST nvmf_lvol
00:15:45.509 ************************************
00:15:45.509 08:54:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:15:45.509 * Looking for test storage...
00:15:45.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- #
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:15:45.510 08:54:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:52.143 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:52.143 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:15:52.143 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:15:52.143 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:52.143 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:52.143 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:52.143 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:52.143 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:52.144 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:52.144 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:52.144 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:52.144 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:15:52.144 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:52.405 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:52.405 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:52.405 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:52.405 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:52.405 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:52.405 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:52.666 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:52.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:15:52.666 00:15:52.666 --- 10.0.0.2 ping statistics --- 00:15:52.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.666 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:15:52.666 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:52.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:52.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:15:52.666 00:15:52.666 --- 10.0.0.1 ping statistics --- 00:15:52.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.666 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:15:52.666 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.666 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:15:52.666 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:52.666 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.666 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:52.666 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:52.666 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.666 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:52.666 08:54:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:52.666 08:54:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:52.666 08:54:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:52.666 08:54:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:52.666 08:54:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:52.666 08:54:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2529123 00:15:52.666 08:54:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2529123 00:15:52.666 08:54:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:52.666 08:54:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 2529123 ']' 00:15:52.666 08:54:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 
-- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.666 08:54:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:52.666 08:54:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.666 08:54:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:52.666 08:54:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:52.666 [2024-06-09 08:54:15.087693] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:15:52.666 [2024-06-09 08:54:15.087769] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.666 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.666 [2024-06-09 08:54:15.161085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:52.927 [2024-06-09 08:54:15.228635] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.927 [2024-06-09 08:54:15.228676] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.927 [2024-06-09 08:54:15.228683] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.927 [2024-06-09 08:54:15.228690] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.927 [2024-06-09 08:54:15.228695] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:52.928 [2024-06-09 08:54:15.228841] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.928 [2024-06-09 08:54:15.228956] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.928 [2024-06-09 08:54:15.228959] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.499 08:54:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:53.499 08:54:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:15:53.499 08:54:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:53.499 08:54:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:53.499 08:54:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:53.499 08:54:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.499 08:54:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:53.499 [2024-06-09 08:54:16.044611] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.760 08:54:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:53.760 08:54:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:53.760 08:54:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:54.021 08:54:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:54.021 08:54:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:54.281 08:54:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:54.281 08:54:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3eb7c08f-42d1-49d1-bc7a-7aa144468628 00:15:54.281 08:54:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3eb7c08f-42d1-49d1-bc7a-7aa144468628 lvol 20 00:15:54.542 08:54:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d3c2ed3d-54a2-4c7c-b9cc-1e49a829a1b5 00:15:54.542 08:54:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:54.803 08:54:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d3c2ed3d-54a2-4c7c-b9cc-1e49a829a1b5 00:15:54.803 08:54:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:55.064 [2024-06-09 08:54:17.418769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.064 08:54:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:55.064 08:54:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2529747 00:15:55.064 08:54:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:55.064 08:54:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:55.325 EAL: No free 2048 kB hugepages reported on node 1 
00:15:56.268 08:54:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d3c2ed3d-54a2-4c7c-b9cc-1e49a829a1b5 MY_SNAPSHOT 00:15:56.268 08:54:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b94bbcc4-8eda-4c73-b271-9b84ede4be51 00:15:56.268 08:54:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d3c2ed3d-54a2-4c7c-b9cc-1e49a829a1b5 30 00:15:56.530 08:54:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b94bbcc4-8eda-4c73-b271-9b84ede4be51 MY_CLONE 00:15:56.791 08:54:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=44ca0089-ac96-44f5-8b00-fc6496268f9c 00:15:56.791 08:54:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 44ca0089-ac96-44f5-8b00-fc6496268f9c 00:15:57.053 08:54:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2529747 00:16:07.057 Initializing NVMe Controllers 00:16:07.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:07.057 Controller IO queue size 128, less than required. 00:16:07.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:07.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:07.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:07.057 Initialization complete. Launching workers. 
00:16:07.057 ======================================================== 00:16:07.057 Latency(us) 00:16:07.057 Device Information : IOPS MiB/s Average min max 00:16:07.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12543.90 49.00 10207.52 1571.87 57501.23 00:16:07.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 18141.40 70.86 7056.85 1362.46 40373.23 00:16:07.057 ======================================================== 00:16:07.057 Total : 30685.30 119.86 8344.82 1362.46 57501.23 00:16:07.057 00:16:07.057 08:54:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d3c2ed3d-54a2-4c7c-b9cc-1e49a829a1b5 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3eb7c08f-42d1-49d1-bc7a-7aa144468628 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:07.057 rmmod nvme_tcp 00:16:07.057 rmmod nvme_fabrics 00:16:07.057 rmmod nvme_keyring 00:16:07.057 
08:54:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2529123 ']' 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2529123 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 2529123 ']' 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 2529123 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2529123 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2529123' 00:16:07.057 killing process with pid 2529123 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 2529123 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 2529123 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.057 08:54:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:08.444 00:16:08.444 real 0m23.110s 00:16:08.444 user 1m3.413s 00:16:08.444 sys 0m7.777s 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:08.444 ************************************ 00:16:08.444 END TEST nvmf_lvol 00:16:08.444 ************************************ 00:16:08.444 08:54:30 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:08.444 08:54:30 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:08.444 08:54:30 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:08.444 08:54:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:08.444 ************************************ 00:16:08.444 START TEST nvmf_lvs_grow 00:16:08.444 ************************************ 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:08.444 * Looking for test storage... 
00:16:08.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.444 08:54:30 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.444 08:54:30 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:08.445 08:54:30 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:08.445 08:54:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:15.075 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:15.075 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:15.075 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:15.075 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:15.075 08:54:37 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:15.075 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:15.076 08:54:37 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:15.076 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:15.076 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:15.076 08:54:37 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:15.076 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:15.337 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:15.337 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:15.337 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:15.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:16:15.598 00:16:15.598 --- 10.0.0.2 ping statistics --- 00:16:15.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.598 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:15.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:15.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.385 ms 00:16:15.598 00:16:15.598 --- 10.0.0.1 ping statistics --- 00:16:15.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.598 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2535906 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2535906 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 2535906 ']' 
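The nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) moves one physical port into a private network namespace so that initiator and target traffic actually cross the wire between the two ports. A dry-run sketch of those commands, using the cvl_0_0/cvl_0_1 interface names from this log (the dry-run wrapper is illustrative, not part of the SPDK test scripts; pass an empty first argument, as root, to execute for real):

```shell
# Dry-run sketch of the namespace setup from nvmf/common.sh (as traced above).
setup_nvmf_netns() {
  local run="${1:-echo}" t=cvl_0_0 i=cvl_0_1 ns=cvl_0_0_ns_spdk
  $run ip -4 addr flush "$t"
  $run ip -4 addr flush "$i"
  $run ip netns add "$ns"
  $run ip link set "$t" netns "$ns"              # target port moves into the namespace
  $run ip addr add 10.0.0.1/24 dev "$i"          # initiator side stays in the default ns
  $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$t"
  $run ip link set "$i" up
  $run ip netns exec "$ns" ip link set "$t" up
  $run ip netns exec "$ns" ip link set lo up
  $run iptables -I INPUT 1 -i "$i" -p tcp --dport 4420 -j ACCEPT
  $run ping -c 1 10.0.0.2                        # initiator -> target across the wire
}
CMDS=$(setup_nvmf_netns echo)   # default: print the commands instead of executing them
```

The two pings in the trace (10.0.0.2 from the default namespace, 10.0.0.1 from inside it) confirm both directions before the target starts.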
00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:15.598 08:54:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.599 08:54:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:15.599 08:54:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:15.599 [2024-06-09 08:54:38.057050] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:16:15.599 [2024-06-09 08:54:38.057110] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.599 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.599 [2024-06-09 08:54:38.127460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.864 [2024-06-09 08:54:38.202418] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.864 [2024-06-09 08:54:38.202457] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.864 [2024-06-09 08:54:38.202465] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.864 [2024-06-09 08:54:38.202471] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.864 [2024-06-09 08:54:38.202477] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
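The target process itself is launched through NVMF_TARGET_NS_CMD (nvmf/common.sh@480), i.e. inside the namespace that now owns cvl_0_0, so its TCP listener binds to the interface that was moved there. A dry-run sketch of that launch plus the transport creation from the trace below; binary and script paths are shortened, and the '-o' and '-u 8192' flags are copied verbatim from NVMF_TRANSPORT_OPTS without interpreting them:

```shell
# Dry-run sketch: start nvmf_tgt inside the test namespace, then create the TCP transport.
launch_nvmf_target() {
  local run="${1:-echo}" ns_exec="ip netns exec cvl_0_0_ns_spdk"
  $run $ns_exec nvmf_tgt -i 0 -e 0xFFFF -m 0x1        # shm id 0, trace mask, core mask 0x1
  $run rpc.py nvmf_create_transport -t tcp -o -u 8192 # flags copied from the trace
}
CMDS=$(launch_nvmf_target echo)   # pass "" as the first arg to actually execute
```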
00:16:15.864 [2024-06-09 08:54:38.202503] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.480 08:54:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:16.480 08:54:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:16:16.480 08:54:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:16.480 08:54:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:16.480 08:54:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:16.480 08:54:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.480 08:54:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:16.741 [2024-06-09 08:54:39.074155] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:16.741 08:54:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:16.741 08:54:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:16:16.741 08:54:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:16.741 08:54:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:16.741 ************************************ 00:16:16.741 START TEST lvs_grow_clean 00:16:16.741 ************************************ 00:16:16.741 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:16:16.741 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:16.741 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:16.741 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:16.741 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:16.741 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:16.741 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:16.741 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:16.741 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:16.741 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:17.002 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:17.002 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:17.002 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5f56d187-1b89-4b8a-a0a8-dfbc2c437662 00:16:17.002 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f56d187-1b89-4b8a-a0a8-dfbc2c437662 00:16:17.002 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:17.264 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:17.264 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:17.264 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5f56d187-1b89-4b8a-a0a8-dfbc2c437662 lvol 150 00:16:17.264 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f43f6795-9fea-46ff-b485-fe1bbde69cb0 00:16:17.264 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:17.524 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:17.524 [2024-06-09 08:54:39.961517] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:17.524 [2024-06-09 08:54:39.961570] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:17.524 true 00:16:17.524 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f56d187-1b89-4b8a-a0a8-dfbc2c437662 00:16:17.524 08:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:17.785 08:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:17.785 08:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
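The cluster counts the test asserts (data_clusters=49 for the 200 MiB AIO file here, and 99 after the later grow to 400 MiB) follow from the 4 MiB cluster size passed as --cluster-sz 4194304, with one cluster's worth of space consumed by lvstore metadata; that one-cluster overhead is inferred from the numbers in this log, not taken from the lvstore documentation:

```shell
# Reproduce the data-cluster arithmetic from the lvs_grow test (sizes in MiB).
data_clusters() {
  local size_mb=$1 cluster_mb=4 md_clusters=1   # metadata overhead inferred from the log
  echo $(( size_mb / cluster_mb - md_clusters ))
}
before=$(data_clusters 200)   # AIO bdev created from a 200 MiB file
after=$(data_clusters 400)    # after truncate -s 400M, bdev_aio_rescan, bdev_lvol_grow_lvstore
```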
00:16:17.785 08:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f43f6795-9fea-46ff-b485-fe1bbde69cb0 00:16:18.045 08:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:18.045 [2024-06-09 08:54:40.587433] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.045 08:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:18.307 08:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2536538 00:16:18.307 08:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:18.307 08:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2536538 /var/tmp/bdevperf.sock 00:16:18.307 08:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 2536538 ']' 00:16:18.307 08:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:18.307 08:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:18.307 08:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:18.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
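The bdevperf half of the test, which starts below, attaches to the just-created subsystem over the initiator address and then kicks off the workload through bdevperf's own RPC socket. A dry-run sketch of those two calls, with arguments copied from the trace and repository paths shortened:

```shell
# Dry-run sketch of the bdevperf attach + test kick-off from the trace.
bdevperf_attach() {
  local run="${1:-echo}" sock=/var/tmp/bdevperf.sock
  $run rpc.py -s "$sock" bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  $run bdevperf.py -s "$sock" perform_tests   # examples/bdev/bdevperf/bdevperf.py in the tree
}
CMDS=$(bdevperf_attach echo)   # pass "" as the first arg to actually execute
```

Note that bdevperf is started with `-z` (wait for RPC) and `-r /var/tmp/bdevperf.sock`, which is why both calls target that socket rather than the default /var/tmp/spdk.sock.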
00:16:18.307 08:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:18.307 08:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:18.307 08:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:18.307 [2024-06-09 08:54:40.763552] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:16:18.307 [2024-06-09 08:54:40.763596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2536538 ] 00:16:18.307 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.307 [2024-06-09 08:54:40.831831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.568 [2024-06-09 08:54:40.895894] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.141 08:54:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:19.141 08:54:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:16:19.141 08:54:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:19.402 Nvme0n1 00:16:19.403 08:54:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:19.664 [ 00:16:19.664 { 00:16:19.664 "name": "Nvme0n1", 00:16:19.664 "aliases": [ 00:16:19.664 
"f43f6795-9fea-46ff-b485-fe1bbde69cb0" 00:16:19.664 ], 00:16:19.664 "product_name": "NVMe disk", 00:16:19.664 "block_size": 4096, 00:16:19.664 "num_blocks": 38912, 00:16:19.664 "uuid": "f43f6795-9fea-46ff-b485-fe1bbde69cb0", 00:16:19.664 "assigned_rate_limits": { 00:16:19.664 "rw_ios_per_sec": 0, 00:16:19.664 "rw_mbytes_per_sec": 0, 00:16:19.664 "r_mbytes_per_sec": 0, 00:16:19.664 "w_mbytes_per_sec": 0 00:16:19.664 }, 00:16:19.664 "claimed": false, 00:16:19.664 "zoned": false, 00:16:19.664 "supported_io_types": { 00:16:19.664 "read": true, 00:16:19.664 "write": true, 00:16:19.664 "unmap": true, 00:16:19.664 "write_zeroes": true, 00:16:19.664 "flush": true, 00:16:19.664 "reset": true, 00:16:19.664 "compare": true, 00:16:19.664 "compare_and_write": true, 00:16:19.665 "abort": true, 00:16:19.665 "nvme_admin": true, 00:16:19.665 "nvme_io": true 00:16:19.665 }, 00:16:19.665 "memory_domains": [ 00:16:19.665 { 00:16:19.665 "dma_device_id": "system", 00:16:19.665 "dma_device_type": 1 00:16:19.665 } 00:16:19.665 ], 00:16:19.665 "driver_specific": { 00:16:19.665 "nvme": [ 00:16:19.665 { 00:16:19.665 "trid": { 00:16:19.665 "trtype": "TCP", 00:16:19.665 "adrfam": "IPv4", 00:16:19.665 "traddr": "10.0.0.2", 00:16:19.665 "trsvcid": "4420", 00:16:19.665 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:19.665 }, 00:16:19.665 "ctrlr_data": { 00:16:19.665 "cntlid": 1, 00:16:19.665 "vendor_id": "0x8086", 00:16:19.665 "model_number": "SPDK bdev Controller", 00:16:19.665 "serial_number": "SPDK0", 00:16:19.665 "firmware_revision": "24.09", 00:16:19.665 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:19.665 "oacs": { 00:16:19.665 "security": 0, 00:16:19.665 "format": 0, 00:16:19.665 "firmware": 0, 00:16:19.665 "ns_manage": 0 00:16:19.665 }, 00:16:19.665 "multi_ctrlr": true, 00:16:19.665 "ana_reporting": false 00:16:19.665 }, 00:16:19.665 "vs": { 00:16:19.665 "nvme_version": "1.3" 00:16:19.665 }, 00:16:19.665 "ns_data": { 00:16:19.665 "id": 1, 00:16:19.665 "can_share": true 00:16:19.665 } 
00:16:19.665 } 00:16:19.665 ], 00:16:19.665 "mp_policy": "active_passive" 00:16:19.665 } 00:16:19.665 } 00:16:19.665 ] 00:16:19.665 08:54:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2536874 00:16:19.665 08:54:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:19.665 08:54:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:19.665 Running I/O for 10 seconds... 00:16:20.608 Latency(us) 00:16:20.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:20.608 Nvme0n1 : 1.00 17675.00 69.04 0.00 0.00 0.00 0.00 0.00 00:16:20.608 =================================================================================================================== 00:16:20.608 Total : 17675.00 69.04 0.00 0.00 0.00 0.00 0.00 00:16:20.608 00:16:21.551 08:54:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5f56d187-1b89-4b8a-a0a8-dfbc2c437662 00:16:21.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:21.551 Nvme0n1 : 2.00 17816.50 69.60 0.00 0.00 0.00 0.00 0.00 00:16:21.551 =================================================================================================================== 00:16:21.551 Total : 17816.50 69.60 0.00 0.00 0.00 0.00 0.00 00:16:21.551 00:16:21.813 true 00:16:21.813 08:54:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f56d187-1b89-4b8a-a0a8-dfbc2c437662 00:16:21.813 08:54:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:16:21.813 08:54:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:21.813 08:54:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:21.813 08:54:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2536874 00:16:22.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:22.756 Nvme0n1 : 3.00 17872.33 69.81 0.00 0.00 0.00 0.00 0.00 00:16:22.756 =================================================================================================================== 00:16:22.756 Total : 17872.33 69.81 0.00 0.00 0.00 0.00 0.00 00:16:22.756 00:16:23.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:23.700 Nvme0n1 : 4.00 17938.75 70.07 0.00 0.00 0.00 0.00 0.00 00:16:23.700 =================================================================================================================== 00:16:23.700 Total : 17938.75 70.07 0.00 0.00 0.00 0.00 0.00 00:16:23.700 00:16:24.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:24.643 Nvme0n1 : 5.00 17968.20 70.19 0.00 0.00 0.00 0.00 0.00 00:16:24.643 =================================================================================================================== 00:16:24.643 Total : 17968.20 70.19 0.00 0.00 0.00 0.00 0.00 00:16:24.643 00:16:25.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:25.587 Nvme0n1 : 6.00 17996.50 70.30 0.00 0.00 0.00 0.00 0.00 00:16:25.587 =================================================================================================================== 00:16:25.587 Total : 17996.50 70.30 0.00 0.00 0.00 0.00 0.00 00:16:25.587 00:16:26.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:26.973 Nvme0n1 : 7.00 18009.29 70.35 0.00 0.00 0.00 0.00 0.00 00:16:26.973 
=================================================================================================================== 00:16:26.973 Total : 18009.29 70.35 0.00 0.00 0.00 0.00 0.00 00:16:26.973 00:16:27.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:27.915 Nvme0n1 : 8.00 18024.25 70.41 0.00 0.00 0.00 0.00 0.00 00:16:27.915 =================================================================================================================== 00:16:27.915 Total : 18024.25 70.41 0.00 0.00 0.00 0.00 0.00 00:16:27.915 00:16:28.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:28.858 Nvme0n1 : 9.00 18032.11 70.44 0.00 0.00 0.00 0.00 0.00 00:16:28.858 =================================================================================================================== 00:16:28.858 Total : 18032.11 70.44 0.00 0.00 0.00 0.00 0.00 00:16:28.858 00:16:29.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:29.801 Nvme0n1 : 10.00 18046.50 70.49 0.00 0.00 0.00 0.00 0.00 00:16:29.801 =================================================================================================================== 00:16:29.801 Total : 18046.50 70.49 0.00 0.00 0.00 0.00 0.00 00:16:29.801 00:16:29.801 00:16:29.801 Latency(us) 00:16:29.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:29.801 Nvme0n1 : 10.00 18052.71 70.52 0.00 0.00 7086.56 5515.95 17694.72 00:16:29.801 =================================================================================================================== 00:16:29.801 Total : 18052.71 70.52 0.00 0.00 7086.56 5515.95 17694.72 00:16:29.801 0 00:16:29.801 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2536538 00:16:29.801 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 
2536538 ']' 00:16:29.801 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 2536538 00:16:29.801 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:16:29.801 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:29.801 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2536538 00:16:29.801 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:29.801 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:29.801 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2536538' 00:16:29.801 killing process with pid 2536538 00:16:29.801 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 2536538 00:16:29.801 Received shutdown signal, test time was about 10.000000 seconds 00:16:29.801 00:16:29.801 Latency(us) 00:16:29.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.801 =================================================================================================================== 00:16:29.801 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:29.801 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 2536538 00:16:29.801 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:30.063 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:30.324 08:54:52 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f56d187-1b89-4b8a-a0a8-dfbc2c437662 00:16:30.324 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:30.324 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:30.324 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:30.324 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:30.586 [2024-06-09 08:54:52.972952] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:30.586 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f56d187-1b89-4b8a-a0a8-dfbc2c437662 00:16:30.586 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:16:30.586 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f56d187-1b89-4b8a-a0a8-dfbc2c437662 00:16:30.586 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:30.586 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:30.586 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:30.586 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:30.586 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:30.586 08:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:30.586 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:30.586 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:30.586 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f56d187-1b89-4b8a-a0a8-dfbc2c437662 00:16:30.847 request: 00:16:30.847 { 00:16:30.847 "uuid": "5f56d187-1b89-4b8a-a0a8-dfbc2c437662", 00:16:30.847 "method": "bdev_lvol_get_lvstores", 00:16:30.847 "req_id": 1 00:16:30.847 } 00:16:30.847 Got JSON-RPC error response 00:16:30.847 response: 00:16:30.847 { 00:16:30.847 "code": -19, 00:16:30.847 "message": "No such device" 00:16:30.847 } 00:16:30.847 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:16:30.847 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:30.847 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:30.847 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:30.847 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:30.847 aio_bdev 
00:16:30.847 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f43f6795-9fea-46ff-b485-fe1bbde69cb0 00:16:30.847 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=f43f6795-9fea-46ff-b485-fe1bbde69cb0 00:16:30.847 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:30.847 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:16:30.847 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:30.847 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:30.847 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:31.108 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f43f6795-9fea-46ff-b485-fe1bbde69cb0 -t 2000 00:16:31.108 [ 00:16:31.108 { 00:16:31.108 "name": "f43f6795-9fea-46ff-b485-fe1bbde69cb0", 00:16:31.108 "aliases": [ 00:16:31.108 "lvs/lvol" 00:16:31.108 ], 00:16:31.108 "product_name": "Logical Volume", 00:16:31.108 "block_size": 4096, 00:16:31.108 "num_blocks": 38912, 00:16:31.108 "uuid": "f43f6795-9fea-46ff-b485-fe1bbde69cb0", 00:16:31.108 "assigned_rate_limits": { 00:16:31.108 "rw_ios_per_sec": 0, 00:16:31.108 "rw_mbytes_per_sec": 0, 00:16:31.108 "r_mbytes_per_sec": 0, 00:16:31.108 "w_mbytes_per_sec": 0 00:16:31.109 }, 00:16:31.109 "claimed": false, 00:16:31.109 "zoned": false, 00:16:31.109 "supported_io_types": { 00:16:31.109 "read": true, 00:16:31.109 "write": true, 00:16:31.109 "unmap": true, 00:16:31.109 "write_zeroes": true, 00:16:31.109 "flush": false, 00:16:31.109 "reset": true, 00:16:31.109 "compare": false, 
00:16:31.109 "compare_and_write": false, 00:16:31.109 "abort": false, 00:16:31.109 "nvme_admin": false, 00:16:31.109 "nvme_io": false 00:16:31.109 }, 00:16:31.109 "driver_specific": { 00:16:31.109 "lvol": { 00:16:31.109 "lvol_store_uuid": "5f56d187-1b89-4b8a-a0a8-dfbc2c437662", 00:16:31.109 "base_bdev": "aio_bdev", 00:16:31.109 "thin_provision": false, 00:16:31.109 "num_allocated_clusters": 38, 00:16:31.109 "snapshot": false, 00:16:31.109 "clone": false, 00:16:31.109 "esnap_clone": false 00:16:31.109 } 00:16:31.109 } 00:16:31.109 } 00:16:31.109 ] 00:16:31.109 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:16:31.109 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f56d187-1b89-4b8a-a0a8-dfbc2c437662 00:16:31.109 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:31.370 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:31.370 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f56d187-1b89-4b8a-a0a8-dfbc2c437662 00:16:31.370 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:31.651 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:31.651 08:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f43f6795-9fea-46ff-b485-fe1bbde69cb0 00:16:31.651 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 
5f56d187-1b89-4b8a-a0a8-dfbc2c437662 00:16:31.983 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:31.983 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:31.983 00:16:31.983 real 0m15.311s 00:16:31.983 user 0m15.036s 00:16:31.983 sys 0m1.223s 00:16:31.983 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:31.983 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:31.983 ************************************ 00:16:31.983 END TEST lvs_grow_clean 00:16:31.983 ************************************ 00:16:31.983 08:54:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:31.983 08:54:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:31.983 08:54:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:31.983 08:54:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:31.983 ************************************ 00:16:31.983 START TEST lvs_grow_dirty 00:16:31.983 ************************************ 00:16:31.983 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:16:31.983 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:31.983 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:31.983 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:31.983 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:16:31.983 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:31.983 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:31.983 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:31.983 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:31.983 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:32.243 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:32.243 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:32.505 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b37f9e66-fd82-4820-8058-2ca23a6aa443 00:16:32.505 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b37f9e66-fd82-4820-8058-2ca23a6aa443 00:16:32.505 08:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:32.505 08:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:32.505 08:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:32.505 
08:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b37f9e66-fd82-4820-8058-2ca23a6aa443 lvol 150 00:16:32.766 08:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6ac65b61-fc43-4fac-a871-c99a20d04a44 00:16:32.766 08:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:32.766 08:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:32.766 [2024-06-09 08:54:55.287776] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:32.766 [2024-06-09 08:54:55.287831] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:32.766 true 00:16:32.766 08:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b37f9e66-fd82-4820-8058-2ca23a6aa443 00:16:32.766 08:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:33.027 08:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:33.027 08:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:33.288 08:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 6ac65b61-fc43-4fac-a871-c99a20d04a44 00:16:33.288 08:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:33.548 [2024-06-09 08:54:55.901644] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.548 08:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:33.548 08:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2539613 00:16:33.548 08:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:33.548 08:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:33.548 08:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2539613 /var/tmp/bdevperf.sock 00:16:33.548 08:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 2539613 ']' 00:16:33.548 08:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:33.548 08:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:33.548 08:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:33.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:33.548 08:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:33.548 08:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:33.809 [2024-06-09 08:54:56.112696] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:16:33.809 [2024-06-09 08:54:56.112744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2539613 ] 00:16:33.809 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.809 [2024-06-09 08:54:56.187615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.809 [2024-06-09 08:54:56.241023] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.381 08:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:34.381 08:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:16:34.381 08:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:34.953 Nvme0n1 00:16:34.953 08:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:34.953 [ 00:16:34.953 { 00:16:34.953 "name": "Nvme0n1", 00:16:34.953 "aliases": [ 00:16:34.953 "6ac65b61-fc43-4fac-a871-c99a20d04a44" 00:16:34.953 ], 00:16:34.953 "product_name": "NVMe disk", 00:16:34.953 "block_size": 4096, 00:16:34.953 "num_blocks": 38912, 
00:16:34.953 "uuid": "6ac65b61-fc43-4fac-a871-c99a20d04a44", 00:16:34.953 "assigned_rate_limits": { 00:16:34.953 "rw_ios_per_sec": 0, 00:16:34.953 "rw_mbytes_per_sec": 0, 00:16:34.953 "r_mbytes_per_sec": 0, 00:16:34.953 "w_mbytes_per_sec": 0 00:16:34.953 }, 00:16:34.953 "claimed": false, 00:16:34.953 "zoned": false, 00:16:34.953 "supported_io_types": { 00:16:34.953 "read": true, 00:16:34.953 "write": true, 00:16:34.953 "unmap": true, 00:16:34.953 "write_zeroes": true, 00:16:34.953 "flush": true, 00:16:34.953 "reset": true, 00:16:34.953 "compare": true, 00:16:34.953 "compare_and_write": true, 00:16:34.953 "abort": true, 00:16:34.953 "nvme_admin": true, 00:16:34.953 "nvme_io": true 00:16:34.953 }, 00:16:34.953 "memory_domains": [ 00:16:34.953 { 00:16:34.953 "dma_device_id": "system", 00:16:34.953 "dma_device_type": 1 00:16:34.953 } 00:16:34.953 ], 00:16:34.953 "driver_specific": { 00:16:34.953 "nvme": [ 00:16:34.953 { 00:16:34.953 "trid": { 00:16:34.953 "trtype": "TCP", 00:16:34.953 "adrfam": "IPv4", 00:16:34.953 "traddr": "10.0.0.2", 00:16:34.953 "trsvcid": "4420", 00:16:34.953 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:34.953 }, 00:16:34.953 "ctrlr_data": { 00:16:34.953 "cntlid": 1, 00:16:34.953 "vendor_id": "0x8086", 00:16:34.953 "model_number": "SPDK bdev Controller", 00:16:34.953 "serial_number": "SPDK0", 00:16:34.953 "firmware_revision": "24.09", 00:16:34.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:34.953 "oacs": { 00:16:34.953 "security": 0, 00:16:34.953 "format": 0, 00:16:34.953 "firmware": 0, 00:16:34.953 "ns_manage": 0 00:16:34.953 }, 00:16:34.953 "multi_ctrlr": true, 00:16:34.953 "ana_reporting": false 00:16:34.953 }, 00:16:34.953 "vs": { 00:16:34.953 "nvme_version": "1.3" 00:16:34.953 }, 00:16:34.953 "ns_data": { 00:16:34.953 "id": 1, 00:16:34.953 "can_share": true 00:16:34.953 } 00:16:34.953 } 00:16:34.953 ], 00:16:34.953 "mp_policy": "active_passive" 00:16:34.953 } 00:16:34.953 } 00:16:34.953 ] 00:16:34.953 08:54:57 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2539952 00:16:34.953 08:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:34.953 08:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:34.953 Running I/O for 10 seconds... 00:16:36.338 Latency(us) 00:16:36.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:36.338 Nvme0n1 : 1.00 17525.00 68.46 0.00 0.00 0.00 0.00 0.00 00:16:36.338 =================================================================================================================== 00:16:36.338 Total : 17525.00 68.46 0.00 0.00 0.00 0.00 0.00 00:16:36.338 00:16:36.908 08:54:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b37f9e66-fd82-4820-8058-2ca23a6aa443 00:16:36.908 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:36.908 Nvme0n1 : 2.00 17670.50 69.03 0.00 0.00 0.00 0.00 0.00 00:16:36.908 =================================================================================================================== 00:16:36.908 Total : 17670.50 69.03 0.00 0.00 0.00 0.00 0.00 00:16:36.908 00:16:37.169 true 00:16:37.169 08:54:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b37f9e66-fd82-4820-8058-2ca23a6aa443 00:16:37.169 08:54:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:37.169 08:54:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 
00:16:37.169 08:54:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:37.169 08:54:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2539952 00:16:38.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:38.112 Nvme0n1 : 3.00 17713.67 69.19 0.00 0.00 0.00 0.00 0.00 00:16:38.112 =================================================================================================================== 00:16:38.112 Total : 17713.67 69.19 0.00 0.00 0.00 0.00 0.00 00:16:38.112 00:16:39.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.053 Nvme0n1 : 4.00 17759.25 69.37 0.00 0.00 0.00 0.00 0.00 00:16:39.053 =================================================================================================================== 00:16:39.053 Total : 17759.25 69.37 0.00 0.00 0.00 0.00 0.00 00:16:39.053 00:16:39.995 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.995 Nvme0n1 : 5.00 17793.00 69.50 0.00 0.00 0.00 0.00 0.00 00:16:39.995 =================================================================================================================== 00:16:39.995 Total : 17793.00 69.50 0.00 0.00 0.00 0.00 0.00 00:16:39.995 00:16:40.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:40.937 Nvme0n1 : 6.00 17819.50 69.61 0.00 0.00 0.00 0.00 0.00 00:16:40.937 =================================================================================================================== 00:16:40.937 Total : 17819.50 69.61 0.00 0.00 0.00 0.00 0.00 00:16:40.937 00:16:42.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:42.323 Nvme0n1 : 7.00 17838.43 69.68 0.00 0.00 0.00 0.00 0.00 00:16:42.323 =================================================================================================================== 00:16:42.323 Total : 17838.43 69.68 
0.00 0.00 0.00 0.00 0.00 00:16:42.323 00:16:43.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:43.265 Nvme0n1 : 8.00 17855.62 69.75 0.00 0.00 0.00 0.00 0.00 00:16:43.265 =================================================================================================================== 00:16:43.265 Total : 17855.62 69.75 0.00 0.00 0.00 0.00 0.00 00:16:43.265 00:16:44.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.207 Nvme0n1 : 9.00 17869.00 69.80 0.00 0.00 0.00 0.00 0.00 00:16:44.207 =================================================================================================================== 00:16:44.207 Total : 17869.00 69.80 0.00 0.00 0.00 0.00 0.00 00:16:44.207 00:16:45.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:45.151 Nvme0n1 : 10.00 17881.30 69.85 0.00 0.00 0.00 0.00 0.00 00:16:45.151 =================================================================================================================== 00:16:45.151 Total : 17881.30 69.85 0.00 0.00 0.00 0.00 0.00 00:16:45.151 00:16:45.151 00:16:45.151 Latency(us) 00:16:45.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:45.151 Nvme0n1 : 10.01 17881.18 69.85 0.00 0.00 7153.57 5816.32 19114.67 00:16:45.151 =================================================================================================================== 00:16:45.151 Total : 17881.18 69.85 0.00 0.00 7153.57 5816.32 19114.67 00:16:45.151 0 00:16:45.151 08:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2539613 00:16:45.151 08:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 2539613 ']' 00:16:45.151 08:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 2539613 00:16:45.151 08:55:07 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:16:45.151 08:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:45.151 08:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2539613 00:16:45.151 08:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:45.151 08:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:45.151 08:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2539613' 00:16:45.151 killing process with pid 2539613 00:16:45.151 08:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 2539613 00:16:45.151 Received shutdown signal, test time was about 10.000000 seconds 00:16:45.151 00:16:45.151 Latency(us) 00:16:45.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.151 =================================================================================================================== 00:16:45.151 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:45.151 08:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 2539613 00:16:45.151 08:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:45.412 08:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:45.672 08:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
b37f9e66-fd82-4820-8058-2ca23a6aa443 00:16:45.672 08:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:45.672 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:45.672 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:45.672 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2535906 00:16:45.672 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2535906 00:16:45.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2535906 Killed "${NVMF_APP[@]}" "$@" 00:16:45.672 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:45.672 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:45.672 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:45.672 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:45.672 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:45.672 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2541971 00:16:45.672 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2541971 00:16:45.672 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:45.672 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 2541971 ']' 00:16:45.672 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.672 08:55:08 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:45.672 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.672 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:45.672 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:45.672 [2024-06-09 08:55:08.217223] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:16:45.672 [2024-06-09 08:55:08.217273] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.933 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.933 [2024-06-09 08:55:08.281791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.933 [2024-06-09 08:55:08.347590] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.933 [2024-06-09 08:55:08.347624] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.933 [2024-06-09 08:55:08.347632] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.933 [2024-06-09 08:55:08.347638] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.933 [2024-06-09 08:55:08.347644] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:45.933 [2024-06-09 08:55:08.347661] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.505 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:46.505 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:16:46.505 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:46.505 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:46.505 08:55:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:46.505 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.505 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:46.765 [2024-06-09 08:55:09.156487] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:46.765 [2024-06-09 08:55:09.156576] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:46.765 [2024-06-09 08:55:09.156605] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:46.765 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:46.765 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6ac65b61-fc43-4fac-a871-c99a20d04a44 00:16:46.765 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=6ac65b61-fc43-4fac-a871-c99a20d04a44 00:16:46.765 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:46.766 08:55:09 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:16:46.766 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:46.766 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:46.766 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:47.026 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6ac65b61-fc43-4fac-a871-c99a20d04a44 -t 2000 00:16:47.026 [ 00:16:47.026 { 00:16:47.026 "name": "6ac65b61-fc43-4fac-a871-c99a20d04a44", 00:16:47.026 "aliases": [ 00:16:47.026 "lvs/lvol" 00:16:47.026 ], 00:16:47.026 "product_name": "Logical Volume", 00:16:47.026 "block_size": 4096, 00:16:47.026 "num_blocks": 38912, 00:16:47.026 "uuid": "6ac65b61-fc43-4fac-a871-c99a20d04a44", 00:16:47.026 "assigned_rate_limits": { 00:16:47.026 "rw_ios_per_sec": 0, 00:16:47.026 "rw_mbytes_per_sec": 0, 00:16:47.026 "r_mbytes_per_sec": 0, 00:16:47.026 "w_mbytes_per_sec": 0 00:16:47.026 }, 00:16:47.026 "claimed": false, 00:16:47.026 "zoned": false, 00:16:47.026 "supported_io_types": { 00:16:47.026 "read": true, 00:16:47.026 "write": true, 00:16:47.026 "unmap": true, 00:16:47.027 "write_zeroes": true, 00:16:47.027 "flush": false, 00:16:47.027 "reset": true, 00:16:47.027 "compare": false, 00:16:47.027 "compare_and_write": false, 00:16:47.027 "abort": false, 00:16:47.027 "nvme_admin": false, 00:16:47.027 "nvme_io": false 00:16:47.027 }, 00:16:47.027 "driver_specific": { 00:16:47.027 "lvol": { 00:16:47.027 "lvol_store_uuid": "b37f9e66-fd82-4820-8058-2ca23a6aa443", 00:16:47.027 "base_bdev": "aio_bdev", 00:16:47.027 "thin_provision": false, 00:16:47.027 "num_allocated_clusters": 38, 00:16:47.027 "snapshot": false, 00:16:47.027 
"clone": false, 00:16:47.027 "esnap_clone": false 00:16:47.027 } 00:16:47.027 } 00:16:47.027 } 00:16:47.027 ] 00:16:47.027 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:16:47.027 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b37f9e66-fd82-4820-8058-2ca23a6aa443 00:16:47.027 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:47.288 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:47.288 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b37f9e66-fd82-4820-8058-2ca23a6aa443 00:16:47.288 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:47.288 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:47.288 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:47.549 [2024-06-09 08:55:09.912367] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:47.549 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b37f9e66-fd82-4820-8058-2ca23a6aa443 00:16:47.549 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:16:47.549 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u b37f9e66-fd82-4820-8058-2ca23a6aa443 00:16:47.549 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:47.549 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:47.549 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:47.549 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:47.549 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:47.549 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:47.549 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:47.549 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:47.549 08:55:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b37f9e66-fd82-4820-8058-2ca23a6aa443 00:16:47.549 request: 00:16:47.549 { 00:16:47.549 "uuid": "b37f9e66-fd82-4820-8058-2ca23a6aa443", 00:16:47.549 "method": "bdev_lvol_get_lvstores", 00:16:47.549 "req_id": 1 00:16:47.549 } 00:16:47.549 Got JSON-RPC error response 00:16:47.549 response: 00:16:47.549 { 00:16:47.549 "code": -19, 00:16:47.549 "message": "No such device" 00:16:47.549 } 00:16:47.549 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:16:47.549 08:55:10 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:47.549 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:47.549 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:47.549 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:47.838 aio_bdev 00:16:47.838 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6ac65b61-fc43-4fac-a871-c99a20d04a44 00:16:47.838 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=6ac65b61-fc43-4fac-a871-c99a20d04a44 00:16:47.838 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:47.838 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:16:47.838 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:47.838 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:47.838 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:48.130 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6ac65b61-fc43-4fac-a871-c99a20d04a44 -t 2000 00:16:48.130 [ 00:16:48.130 { 00:16:48.130 "name": "6ac65b61-fc43-4fac-a871-c99a20d04a44", 00:16:48.130 "aliases": [ 00:16:48.130 "lvs/lvol" 00:16:48.130 ], 00:16:48.130 "product_name": "Logical Volume", 00:16:48.130 "block_size": 4096, 
00:16:48.130 "num_blocks": 38912, 00:16:48.130 "uuid": "6ac65b61-fc43-4fac-a871-c99a20d04a44", 00:16:48.130 "assigned_rate_limits": { 00:16:48.130 "rw_ios_per_sec": 0, 00:16:48.130 "rw_mbytes_per_sec": 0, 00:16:48.130 "r_mbytes_per_sec": 0, 00:16:48.130 "w_mbytes_per_sec": 0 00:16:48.130 }, 00:16:48.130 "claimed": false, 00:16:48.130 "zoned": false, 00:16:48.130 "supported_io_types": { 00:16:48.130 "read": true, 00:16:48.130 "write": true, 00:16:48.130 "unmap": true, 00:16:48.130 "write_zeroes": true, 00:16:48.130 "flush": false, 00:16:48.130 "reset": true, 00:16:48.130 "compare": false, 00:16:48.130 "compare_and_write": false, 00:16:48.130 "abort": false, 00:16:48.130 "nvme_admin": false, 00:16:48.130 "nvme_io": false 00:16:48.130 }, 00:16:48.130 "driver_specific": { 00:16:48.130 "lvol": { 00:16:48.130 "lvol_store_uuid": "b37f9e66-fd82-4820-8058-2ca23a6aa443", 00:16:48.130 "base_bdev": "aio_bdev", 00:16:48.130 "thin_provision": false, 00:16:48.130 "num_allocated_clusters": 38, 00:16:48.130 "snapshot": false, 00:16:48.130 "clone": false, 00:16:48.130 "esnap_clone": false 00:16:48.130 } 00:16:48.130 } 00:16:48.130 } 00:16:48.130 ] 00:16:48.130 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:16:48.130 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b37f9e66-fd82-4820-8058-2ca23a6aa443 00:16:48.130 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:48.392 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:48.392 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b37f9e66-fd82-4820-8058-2ca23a6aa443 00:16:48.392 08:55:10 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:48.392 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:48.392 08:55:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6ac65b61-fc43-4fac-a871-c99a20d04a44 00:16:48.653 08:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b37f9e66-fd82-4820-8058-2ca23a6aa443 00:16:48.653 08:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:48.914 00:16:48.914 real 0m16.854s 00:16:48.914 user 0m44.389s 00:16:48.914 sys 0m2.892s 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:48.914 ************************************ 00:16:48.914 END TEST lvs_grow_dirty 00:16:48.914 ************************************ 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:48.914 
08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:48.914 nvmf_trace.0 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:48.914 08:55:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:49.175 rmmod nvme_tcp 00:16:49.175 rmmod nvme_fabrics 00:16:49.175 rmmod nvme_keyring 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2541971 ']' 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2541971 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 2541971 ']' 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 2541971 00:16:49.175 08:55:11 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2541971 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2541971' 00:16:49.175 killing process with pid 2541971 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 2541971 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 2541971 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.175 08:55:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.726 08:55:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:51.726 00:16:51.726 real 0m42.978s 00:16:51.726 user 1m5.345s 00:16:51.726 sys 0m9.779s 00:16:51.726 08:55:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:51.726 08:55:13 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:51.726 ************************************ 00:16:51.726 END TEST nvmf_lvs_grow 00:16:51.726 ************************************ 00:16:51.726 08:55:13 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:51.726 08:55:13 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:51.726 08:55:13 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:51.726 08:55:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:51.726 ************************************ 00:16:51.726 START TEST nvmf_bdev_io_wait 00:16:51.726 ************************************ 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:51.726 * Looking for test storage... 
00:16:51.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.726 08:55:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.726 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.726 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.726 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.726 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.726 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.726 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.726 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:16:51.727 08:55:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.315 08:55:20 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:58.315 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:58.315 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:58.315 08:55:20 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:58.315 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:58.315 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:58.315 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:58.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:58.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:16:58.315 00:16:58.315 --- 10.0.0.2 ping statistics --- 00:16:58.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.316 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:58.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:58.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:16:58.316 00:16:58.316 --- 10.0.0.1 ping statistics --- 00:16:58.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.316 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2546735 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2546735 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 2546735 ']' 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:58.316 08:55:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:58.576 [2024-06-09 08:55:20.915552] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:16:58.576 [2024-06-09 08:55:20.915602] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.576 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.576 [2024-06-09 08:55:20.983976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:58.576 [2024-06-09 08:55:21.053019] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:58.576 [2024-06-09 08:55:21.053054] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.576 [2024-06-09 08:55:21.053062] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.576 [2024-06-09 08:55:21.053069] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.576 [2024-06-09 08:55:21.053074] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.577 [2024-06-09 08:55:21.053204] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.577 [2024-06-09 08:55:21.053329] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.577 [2024-06-09 08:55:21.053482] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.577 [2024-06-09 08:55:21.053483] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:59.148 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:59.148 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:16:59.148 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:59.148 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:59.148 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:59.409 [2024-06-09 08:55:21.797437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:59.409 Malloc0 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.409 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:59.410 [2024-06-09 08:55:21.864703] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2547053 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2547055 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.410 { 00:16:59.410 "params": { 00:16:59.410 "name": "Nvme$subsystem", 00:16:59.410 "trtype": "$TEST_TRANSPORT", 
00:16:59.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.410 "adrfam": "ipv4", 00:16:59.410 "trsvcid": "$NVMF_PORT", 00:16:59.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.410 "hdgst": ${hdgst:-false}, 00:16:59.410 "ddgst": ${ddgst:-false} 00:16:59.410 }, 00:16:59.410 "method": "bdev_nvme_attach_controller" 00:16:59.410 } 00:16:59.410 EOF 00:16:59.410 )") 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2547057 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2547060 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.410 { 00:16:59.410 "params": { 00:16:59.410 "name": "Nvme$subsystem", 00:16:59.410 "trtype": "$TEST_TRANSPORT", 00:16:59.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.410 "adrfam": "ipv4", 00:16:59.410 "trsvcid": "$NVMF_PORT", 00:16:59.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.410 "hdgst": ${hdgst:-false}, 00:16:59.410 "ddgst": ${ddgst:-false} 00:16:59.410 }, 00:16:59.410 "method": "bdev_nvme_attach_controller" 00:16:59.410 } 00:16:59.410 EOF 00:16:59.410 )") 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.410 { 00:16:59.410 "params": { 00:16:59.410 "name": "Nvme$subsystem", 00:16:59.410 "trtype": "$TEST_TRANSPORT", 00:16:59.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.410 "adrfam": "ipv4", 00:16:59.410 "trsvcid": "$NVMF_PORT", 00:16:59.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.410 "hdgst": ${hdgst:-false}, 00:16:59.410 "ddgst": ${ddgst:-false} 00:16:59.410 }, 00:16:59.410 "method": "bdev_nvme_attach_controller" 00:16:59.410 } 00:16:59.410 EOF 00:16:59.410 )") 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.410 { 00:16:59.410 "params": { 00:16:59.410 "name": "Nvme$subsystem", 00:16:59.410 "trtype": "$TEST_TRANSPORT", 00:16:59.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.410 "adrfam": "ipv4", 00:16:59.410 "trsvcid": "$NVMF_PORT", 00:16:59.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.410 "hdgst": ${hdgst:-false}, 00:16:59.410 "ddgst": ${ddgst:-false} 00:16:59.410 }, 00:16:59.410 "method": "bdev_nvme_attach_controller" 00:16:59.410 } 00:16:59.410 EOF 00:16:59.410 )") 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2547053 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:59.410 "params": { 00:16:59.410 "name": "Nvme1", 00:16:59.410 "trtype": "tcp", 00:16:59.410 "traddr": "10.0.0.2", 00:16:59.410 "adrfam": "ipv4", 00:16:59.410 "trsvcid": "4420", 00:16:59.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:59.410 "hdgst": false, 00:16:59.410 "ddgst": false 00:16:59.410 }, 00:16:59.410 "method": "bdev_nvme_attach_controller" 00:16:59.410 }' 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:59.410 "params": { 00:16:59.410 "name": "Nvme1", 00:16:59.410 "trtype": "tcp", 00:16:59.410 "traddr": "10.0.0.2", 00:16:59.410 "adrfam": "ipv4", 00:16:59.410 "trsvcid": "4420", 00:16:59.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:59.410 "hdgst": false, 00:16:59.410 "ddgst": false 00:16:59.410 }, 00:16:59.410 "method": "bdev_nvme_attach_controller" 00:16:59.410 }' 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:59.410 "params": { 00:16:59.410 "name": "Nvme1", 00:16:59.410 "trtype": "tcp", 00:16:59.410 "traddr": "10.0.0.2", 00:16:59.410 "adrfam": "ipv4", 00:16:59.410 "trsvcid": "4420", 00:16:59.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:59.410 "hdgst": false, 00:16:59.410 "ddgst": false 00:16:59.410 }, 00:16:59.410 "method": "bdev_nvme_attach_controller" 00:16:59.410 }' 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:59.410 08:55:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:59.410 "params": { 00:16:59.410 "name": "Nvme1", 00:16:59.410 "trtype": "tcp", 00:16:59.410 "traddr": "10.0.0.2", 00:16:59.410 "adrfam": "ipv4", 00:16:59.410 "trsvcid": "4420", 00:16:59.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:59.410 "hdgst": false, 00:16:59.410 "ddgst": false 00:16:59.410 }, 00:16:59.410 "method": "bdev_nvme_attach_controller" 00:16:59.410 }' 00:16:59.410 [2024-06-09 08:55:21.917645] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:16:59.410 [2024-06-09 08:55:21.917698] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:59.410 [2024-06-09 08:55:21.919247] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:16:59.410 [2024-06-09 08:55:21.919248] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:16:59.410 [2024-06-09 08:55:21.919293] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-06-09 08:55:21.919294] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:59.410 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:59.410 [2024-06-09 08:55:21.919750] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:16:59.410 [2024-06-09 08:55:21.919791] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:59.410 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.671 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.671 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.671 [2024-06-09 08:55:22.063886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.671 [2024-06-09 08:55:22.107478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.671 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.671 [2024-06-09 08:55:22.116051] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:16:59.671 [2024-06-09 08:55:22.154078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.671 [2024-06-09 08:55:22.157326] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:16:59.671 [2024-06-09 08:55:22.203917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.671 [2024-06-09 08:55:22.204045] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:16:59.932 [2024-06-09 08:55:22.254480] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:16:59.932 Running I/O for 1 seconds... 00:16:59.932 Running I/O for 1 seconds... 00:16:59.932 Running I/O for 1 seconds... 00:16:59.932 Running I/O for 1 seconds... 
00:17:00.876 00:17:00.876 Latency(us) 00:17:00.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.876 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:00.876 Nvme1n1 : 1.01 11779.41 46.01 0.00 0.00 10804.29 4450.99 15837.87 00:17:00.876 =================================================================================================================== 00:17:00.876 Total : 11779.41 46.01 0.00 0.00 10804.29 4450.99 15837.87 00:17:00.876 00:17:00.876 Latency(us) 00:17:00.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.876 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:00.876 Nvme1n1 : 1.01 12668.73 49.49 0.00 0.00 10067.61 4969.81 24139.09 00:17:00.876 =================================================================================================================== 00:17:00.876 Total : 12668.73 49.49 0.00 0.00 10067.61 4969.81 24139.09 00:17:00.876 00:17:00.876 Latency(us) 00:17:00.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.876 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:00.876 Nvme1n1 : 1.00 11665.29 45.57 0.00 0.00 10949.15 3044.69 23811.41 00:17:00.876 =================================================================================================================== 00:17:00.876 Total : 11665.29 45.57 0.00 0.00 10949.15 3044.69 23811.41 00:17:01.136 00:17:01.136 Latency(us) 00:17:01.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.136 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:01.136 Nvme1n1 : 1.00 78212.94 305.52 0.00 0.00 1629.72 279.89 7482.03 00:17:01.136 =================================================================================================================== 00:17:01.136 Total : 78212.94 305.52 0.00 0.00 1629.72 279.89 7482.03 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@38 -- # wait 2547055 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2547057 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2547060 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:01.136 rmmod nvme_tcp 00:17:01.136 rmmod nvme_fabrics 00:17:01.136 rmmod nvme_keyring 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2546735 ']' 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2546735 00:17:01.136 08:55:23 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 2546735 ']' 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 2546735 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:01.136 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2546735 00:17:01.396 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:01.396 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:01.396 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2546735' 00:17:01.396 killing process with pid 2546735 00:17:01.396 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 2546735 00:17:01.396 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 2546735 00:17:01.396 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:01.396 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:01.396 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:01.396 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:01.396 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:01.396 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.396 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.396 08:55:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.941 08:55:25 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:03.941 00:17:03.941 real 0m12.044s 00:17:03.941 user 0m17.713s 00:17:03.941 sys 0m6.469s 00:17:03.941 08:55:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:03.941 08:55:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:03.941 ************************************ 00:17:03.941 END TEST nvmf_bdev_io_wait 00:17:03.941 ************************************ 00:17:03.941 08:55:25 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:03.941 08:55:25 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:03.941 08:55:25 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:03.941 08:55:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:03.941 ************************************ 00:17:03.941 START TEST nvmf_queue_depth 00:17:03.941 ************************************ 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:03.941 * Looking for test storage... 
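The `run_test nvmf_queue_depth …/queue_depth.sh --transport=tcp` call above is the harness's standard wrapper: it prints the `START TEST` / `END TEST` banners seen throughout this log and propagates the sub-script's exit code. A minimal sketch of that pattern (simplified; the real helper in common/autotest_common.sh also records timing and xtrace state):

```shell
# Simplified sketch of the run_test banner/exit-code pattern; the real helper
# in common/autotest_common.sh additionally handles timing and xtrace bookkeeping.
run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@"                       # run the test script with its arguments
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}
```

Usage mirrors the log: `run_test nvmf_queue_depth ./queue_depth.sh --transport=tcp`.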
00:17:03.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:03.941 08:55:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:10.536 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:10.536 Found 0000:4b:00.1 (0x8086 - 
0x159b) 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.536 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:10.536 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:10.537 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:10.537 08:55:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.537 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.537 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.537 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:10.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
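The `nvmf_tcp_init` steps above move the target-side port into its own network namespace so target and initiator traffic traverse the physical NICs. Collected into one sketch (interface and namespace names are taken from this log; the `run` wrapper echoes instead of executing by default, since the real commands need root and the e810 ports):

```shell
# Sketch of the network-namespace topology the harness builds for phy TCP runs.
# Names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk, 10.0.0.x) are the ones in this log.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

setup_test_netns() {
  local ns=cvl_0_0_ns_spdk
  run ip netns add "$ns"
  run ip link set cvl_0_0 netns "$ns"          # target port into the namespace
  run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root ns
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  run ip link set cvl_0_1 up
  run ip netns exec "$ns" ip link set cvl_0_0 up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2                       # initiator -> target reachability check
}
```

With `DRY_RUN=0` this performs the same setup the log shows; by default it only prints the commands.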
00:17:10.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:17:10.537 00:17:10.537 --- 10.0.0.2 ping statistics --- 00:17:10.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.537 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:17:10.537 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:10.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:17:10.798 00:17:10.798 --- 10.0.0.1 ping statistics --- 00:17:10.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.798 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2551486 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2551486 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 2551486 ']' 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:10.798 08:55:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:10.798 [2024-06-09 08:55:33.180944] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:17:10.798 [2024-06-09 08:55:33.180990] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.798 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.798 [2024-06-09 08:55:33.261956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.798 [2024-06-09 08:55:33.325908] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
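`nvmfappstart` launches `nvmf_tgt` inside the namespace and then blocks on `waitforlisten`, which produces the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above. A hedged sketch of that polling pattern (the loop bound and the use of `[ -e ]` rather than a strict `[ -S ]` socket test are simplifications for illustration):

```shell
# Sketch of the waitforlisten idea: poll until the RPC socket path appears
# while the target process is still alive. Details differ from the real
# common/autotest_common.sh implementation.
waitforlisten() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
  echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
  for i in $(seq 1 100); do
    kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
    [ -e "$sock" ] && return 0               # RPC socket is up
    sleep 0.1
  done
  return 1                                   # timed out
}
```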
00:17:10.798 [2024-06-09 08:55:33.325942] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.798 [2024-06-09 08:55:33.325950] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.798 [2024-06-09 08:55:33.325956] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.798 [2024-06-09 08:55:33.325961] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.798 [2024-06-09 08:55:33.325978] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.741 08:55:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:11.741 08:55:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:17:11.741 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:11.741 08:55:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:11.741 08:55:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:11.741 08:55:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.741 08:55:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:11.741 08:55:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:11.741 08:55:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:11.741 [2024-06-09 08:55:34.004211] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:11.741 08:55:34 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:11.741 Malloc0 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:11.741 [2024-06-09 08:55:34.071481] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2551753 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id 
$NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2551753 /var/tmp/bdevperf.sock 00:17:11.741 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 2551753 ']' 00:17:11.742 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.742 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:11.742 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.742 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:11.742 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:11.742 [2024-06-09 08:55:34.124285] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:17:11.742 [2024-06-09 08:55:34.124347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2551753 ] 00:17:11.742 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.742 [2024-06-09 08:55:34.188208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.742 [2024-06-09 08:55:34.262399] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.379 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:12.379 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:17:12.379 08:55:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:12.379 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:12.379 08:55:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:12.640 NVMe0n1 00:17:12.640 08:55:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:12.640 08:55:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:12.901 Running I/O for 10 seconds... 
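Between target startup and the bdevperf run, the log shows queue_depth.sh configuring the target over its RPC socket and the initiator-side bdevperf attaching a controller over a second socket. The sequence, collected into one sketch (the `RPC` path and dry-run wrapper are illustrative; the commands and arguments are the ones visible in the log):

```shell
# The RPC calls queue_depth.sh issues, in order. Dry-run by default; set
# DRY_RUN=0 and point RPC= at a real rpc.py to execute them.
RPC=${RPC:-scripts/rpc.py}                 # illustrative path, not from the log
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

configure_target() {
  local nqn=nqn.2016-06.io.spdk:cnode1
  run "$RPC" nvmf_create_transport -t tcp -o -u 8192
  run "$RPC" bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
  run "$RPC" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
  run "$RPC" nvmf_subsystem_add_ns "$nqn" Malloc0
  run "$RPC" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: bdevperf runs with -z -q 1024 -o 4096 -w verify -t 10 and
  # gets its controller attached over its own RPC socket before perform_tests.
  run "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"
}
```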
00:17:22.904 00:17:22.904 Latency(us) 00:17:22.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.904 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:22.904 Verification LBA range: start 0x0 length 0x4000 00:17:22.904 NVMe0n1 : 10.09 11261.43 43.99 0.00 0.00 90284.79 25122.13 75147.95 00:17:22.904 =================================================================================================================== 00:17:22.904 Total : 11261.43 43.99 0.00 0.00 90284.79 25122.13 75147.95 00:17:22.904 0 00:17:22.904 08:55:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2551753 00:17:22.904 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 2551753 ']' 00:17:22.904 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 2551753 00:17:22.904 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:17:22.904 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:22.904 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2551753 00:17:22.904 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:22.904 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:22.904 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2551753' 00:17:22.904 killing process with pid 2551753 00:17:22.904 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 2551753 00:17:22.904 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.904 00:17:22.904 Latency(us) 00:17:22.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.904 
=================================================================================================================== 00:17:22.904 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:22.904 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 2551753 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:23.164 rmmod nvme_tcp 00:17:23.164 rmmod nvme_fabrics 00:17:23.164 rmmod nvme_keyring 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2551486 ']' 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2551486 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 2551486 ']' 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 2551486 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:23.164 08:55:45 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2551486 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2551486' 00:17:23.164 killing process with pid 2551486 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 2551486 00:17:23.164 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 2551486 00:17:23.425 08:55:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:23.425 08:55:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:23.425 08:55:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:23.425 08:55:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:23.425 08:55:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:23.425 08:55:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.425 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.425 08:55:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.336 08:55:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:25.336 00:17:25.336 real 0m21.838s 00:17:25.336 user 0m25.658s 00:17:25.336 sys 0m6.375s 00:17:25.336 08:55:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:25.336 08:55:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:25.336 ************************************ 00:17:25.336 END TEST nvmf_queue_depth 
00:17:25.336 ************************************ 00:17:25.336 08:55:47 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:25.336 08:55:47 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:25.336 08:55:47 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:25.336 08:55:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:25.598 ************************************ 00:17:25.598 START TEST nvmf_target_multipath 00:17:25.598 ************************************ 00:17:25.598 08:55:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:25.598 * Looking for test storage... 00:17:25.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:25.598 08:55:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:32.189 
08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.189 08:55:54 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:32.189 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:32.189 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:32.189 
08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:32.189 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.189 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:32.189 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:32.190 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.190 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:32.190 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:32.190 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:32.190 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:32.190 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:32.190 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.190 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.190 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:32.190 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:32.190 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:32.190 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:32.190 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:32.190 08:55:54 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:32.190 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.190 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:32.190 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:32.451 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:32.451 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:32.451 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:32.451 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:32.451 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:32.451 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:32.451 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:32.451 08:55:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:32.711 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:32.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:32.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:17:32.711 00:17:32.711 --- 10.0.0.2 ping statistics --- 00:17:32.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.711 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:17:32.711 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:32.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:32.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.419 ms 00:17:32.711 00:17:32.712 --- 10.0.0.1 ping statistics --- 00:17:32.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.712 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:32.712 only one NIC for nvmf test 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:32.712 rmmod nvme_tcp 00:17:32.712 rmmod nvme_fabrics 00:17:32.712 rmmod nvme_keyring 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.712 08:55:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:35.260 00:17:35.260 real 0m9.324s 00:17:35.260 user 0m2.071s 00:17:35.260 sys 0m5.182s 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:35.260 08:55:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:35.260 ************************************ 00:17:35.260 END TEST nvmf_target_multipath 00:17:35.260 ************************************ 00:17:35.260 08:55:57 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:35.260 08:55:57 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:35.260 08:55:57 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:35.260 08:55:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:35.260 ************************************ 00:17:35.260 START TEST nvmf_zcopy 00:17:35.260 ************************************ 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:35.260 * Looking for test storage... 
00:17:35.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.260 08:55:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:35.261 08:55:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:41.851 08:56:04 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:41.851 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:41.852 08:56:04 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:41.852 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:41.852 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:41.852 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:41.852 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.852 08:56:04 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:41.852 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:42.113 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:42.113 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:42.113 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:42.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:17:42.113 00:17:42.113 --- 10.0.0.2 ping statistics --- 00:17:42.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.113 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:17:42.113 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:42.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:42.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.431 ms 00:17:42.113 00:17:42.113 --- 10.0.0.1 ping statistics --- 00:17:42.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.113 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:17:42.113 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.113 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2562389 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2562389 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 2562389 ']' 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:42.114 08:56:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:42.114 [2024-06-09 08:56:04.611497] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:17:42.114 [2024-06-09 08:56:04.611561] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.114 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.375 [2024-06-09 08:56:04.699057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.375 [2024-06-09 08:56:04.791095] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.376 [2024-06-09 08:56:04.791153] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.376 [2024-06-09 08:56:04.791162] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.376 [2024-06-09 08:56:04.791169] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.376 [2024-06-09 08:56:04.791175] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:42.376 [2024-06-09 08:56:04.791200] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.984 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:42.984 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:17:42.984 08:56:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:42.985 [2024-06-09 08:56:05.443269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 
00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:42.985 [2024-06-09 08:56:05.467508] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:42.985 malloc0 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:42.985 { 00:17:42.985 "params": { 00:17:42.985 "name": "Nvme$subsystem", 00:17:42.985 "trtype": "$TEST_TRANSPORT", 00:17:42.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.985 "adrfam": "ipv4", 00:17:42.985 "trsvcid": "$NVMF_PORT", 00:17:42.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.985 "hdgst": ${hdgst:-false}, 00:17:42.985 "ddgst": ${ddgst:-false} 00:17:42.985 }, 00:17:42.985 "method": "bdev_nvme_attach_controller" 00:17:42.985 } 00:17:42.985 EOF 00:17:42.985 )") 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:42.985 08:56:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:42.985 "params": { 00:17:42.985 "name": "Nvme1", 00:17:42.985 "trtype": "tcp", 00:17:42.985 "traddr": "10.0.0.2", 00:17:42.985 "adrfam": "ipv4", 00:17:42.985 "trsvcid": "4420", 00:17:42.985 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:42.985 "hdgst": false, 00:17:42.985 "ddgst": false 00:17:42.985 }, 00:17:42.985 "method": "bdev_nvme_attach_controller" 00:17:42.985 }' 00:17:43.246 [2024-06-09 08:56:05.563255] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:17:43.246 [2024-06-09 08:56:05.563314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2562431 ] 00:17:43.246 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.246 [2024-06-09 08:56:05.627299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.246 [2024-06-09 08:56:05.702775] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.507 Running I/O for 10 seconds... 00:17:53.510 00:17:53.510 Latency(us) 00:17:53.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.510 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:53.510 Verification LBA range: start 0x0 length 0x1000 00:17:53.510 Nvme1n1 : 10.01 9214.80 71.99 0.00 0.00 13838.91 1481.39 40195.41 00:17:53.510 =================================================================================================================== 00:17:53.510 Total : 9214.80 71.99 0.00 0.00 13838.91 1481.39 40195.41 00:17:53.510 08:56:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2564541 00:17:53.510 08:56:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:17:53.510 08:56:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:53.510 08:56:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:53.510 08:56:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:53.510 08:56:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:53.510 08:56:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:53.510 08:56:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:53.510 08:56:16 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:53.510 { 00:17:53.510 "params": { 00:17:53.510 "name": "Nvme$subsystem", 00:17:53.510 "trtype": "$TEST_TRANSPORT", 00:17:53.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:53.510 "adrfam": "ipv4", 00:17:53.510 "trsvcid": "$NVMF_PORT", 00:17:53.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:53.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:53.510 "hdgst": ${hdgst:-false}, 00:17:53.510 "ddgst": ${ddgst:-false} 00:17:53.510 }, 00:17:53.511 "method": "bdev_nvme_attach_controller" 00:17:53.511 } 00:17:53.511 EOF 00:17:53.511 )") 00:17:53.511 08:56:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:53.511 [2024-06-09 08:56:16.020857] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.511 [2024-06-09 08:56:16.020885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.511 08:56:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:53.511 08:56:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:53.511 08:56:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:53.511 "params": { 00:17:53.511 "name": "Nvme1", 00:17:53.511 "trtype": "tcp", 00:17:53.511 "traddr": "10.0.0.2", 00:17:53.511 "adrfam": "ipv4", 00:17:53.511 "trsvcid": "4420", 00:17:53.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:53.511 "hdgst": false, 00:17:53.511 "ddgst": false 00:17:53.511 }, 00:17:53.511 "method": "bdev_nvme_attach_controller" 00:17:53.511 }' 00:17:53.511 [2024-06-09 08:56:16.032861] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.511 [2024-06-09 08:56:16.032869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.511 [2024-06-09 08:56:16.044891] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.511 [2024-06-09 
08:56:16.044899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.511 [2024-06-09 08:56:16.056919] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.511 [2024-06-09 08:56:16.056926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.511 [2024-06-09 08:56:16.059499] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:17:53.511 [2024-06-09 08:56:16.059543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2564541 ] 00:17:53.511 [2024-06-09 08:56:16.068950] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.772 [2024-06-09 08:56:16.068958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.772 [2024-06-09 08:56:16.080981] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.080988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.773 [2024-06-09 08:56:16.093011] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.093019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.105044] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.105051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.116996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.773 [2024-06-09 08:56:16.117076] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:17:53.773 [2024-06-09 08:56:16.117086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.129107] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.129116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.141136] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.141144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.153194] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.153206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.165224] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.165234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.177254] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.177263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.181759] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.773 [2024-06-09 08:56:16.189285] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.189293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.201319] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.201333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.213348] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.213356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.225378] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.225387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.237412] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.237420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.249444] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.249453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.261482] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.261497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.273511] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.273519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.285544] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.285554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.297576] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.297585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.309609] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.309616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.773 [2024-06-09 08:56:16.321643] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.773 [2024-06-09 08:56:16.321650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.034 [2024-06-09 08:56:16.333676] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.034 [2024-06-09 08:56:16.333687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.034 [2024-06-09 08:56:16.345707] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.034 [2024-06-09 08:56:16.345716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.034 [2024-06-09 08:56:16.357736] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.034 [2024-06-09 08:56:16.357743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.034 [2024-06-09 08:56:16.369767] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.034 [2024-06-09 08:56:16.369774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.034 [2024-06-09 08:56:16.381799] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.034 [2024-06-09 08:56:16.381807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.034 [2024-06-09 08:56:16.393829] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.034 [2024-06-09 08:56:16.393838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.034 [2024-06-09 08:56:16.405859] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.034 
[2024-06-09 08:56:16.405866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.034 [2024-06-09 08:56:16.417891] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.034 [2024-06-09 08:56:16.417898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.034 [2024-06-09 08:56:16.429923] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.034 [2024-06-09 08:56:16.429931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.034 [2024-06-09 08:56:16.441961] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.034 [2024-06-09 08:56:16.441975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.034 Running I/O for 5 seconds... 00:17:54.034 [2024-06-09 08:56:16.453987] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.034 [2024-06-09 08:56:16.453994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.034 [2024-06-09 08:56:16.470983] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.034 [2024-06-09 08:56:16.470998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.034 [2024-06-09 08:56:16.485116] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.034 [2024-06-09 08:56:16.485132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.034 [2024-06-09 08:56:16.498514] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.034 [2024-06-09 08:56:16.498531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.034 [2024-06-09 08:56:16.511768] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.034 [2024-06-09 
08:56:16.511783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.034 [2024-06-09 08:56:16.524024] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.034 [2024-06-09 08:56:16.524039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.034 [2024-06-09 08:56:16.536974] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.034 [2024-06-09 08:56:16.536989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.035 [2024-06-09 08:56:16.550016] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.035 [2024-06-09 08:56:16.550032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.035 [2024-06-09 08:56:16.563199] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.035 [2024-06-09 08:56:16.563214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.035 [2024-06-09 08:56:16.576139] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.035 [2024-06-09 08:56:16.576158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.035 [2024-06-09 08:56:16.589337] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.035 [2024-06-09 08:56:16.589352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.602789] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.602805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.615591] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.615606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.628908] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.628923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.642315] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.642330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.655713] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.655728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.669203] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.669218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.682512] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.682527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.695457] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.695472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.708946] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.708961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.722132] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.722146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 
[2024-06-09 08:56:16.735061] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.735076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.748610] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.748625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.761459] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.761474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.774725] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.774740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.788141] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.788156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.801411] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.801427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.814265] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.814279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.827361] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.827382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.840588] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.840603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.296 [2024-06-09 08:56:16.853682] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.296 [2024-06-09 08:56:16.853697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:16.866844] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:16.866859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:16.880082] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:16.880097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:16.892776] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:16.892791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:16.905945] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:16.905960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:16.919205] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:16.919219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:16.931844] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:16.931858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:16.945238] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:16.945252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:16.958748] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:16.958763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:16.972174] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:16.972189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:16.985361] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:16.985375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:16.998087] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:16.998101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:17.010887] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:17.010903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:17.023626] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:17.023641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:17.036649] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:17.036664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:17.050172] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 
[2024-06-09 08:56:17.050187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:17.062827] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:17.062842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:17.075725] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:17.075740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:17.089025] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:17.089040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:17.102205] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:17.102220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.558 [2024-06-09 08:56:17.115464] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.558 [2024-06-09 08:56:17.115479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.819 [2024-06-09 08:56:17.128699] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.819 [2024-06-09 08:56:17.128714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.819 [2024-06-09 08:56:17.141688] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.819 [2024-06-09 08:56:17.141702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.819 [2024-06-09 08:56:17.155437] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.819 [2024-06-09 08:56:17.155452] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.819 [2024-06-09 08:56:17.168441] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.819 [2024-06-09 08:56:17.168456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.819 [2024-06-09 08:56:17.181449] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.819 [2024-06-09 08:56:17.181463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.819 [2024-06-09 08:56:17.194123] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.819 [2024-06-09 08:56:17.194137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.819 [2024-06-09 08:56:17.207316] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.819 [2024-06-09 08:56:17.207331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.819 [2024-06-09 08:56:17.219849] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.819 [2024-06-09 08:56:17.219864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.819 [2024-06-09 08:56:17.233471] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.819 [2024-06-09 08:56:17.233486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.820 [2024-06-09 08:56:17.246923] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.820 [2024-06-09 08:56:17.246938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.820 [2024-06-09 08:56:17.259804] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.820 [2024-06-09 08:56:17.259818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:54.820 [2024-06-09 08:56:17.273054] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.820 [2024-06-09 08:56:17.273069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.820 [2024-06-09 08:56:17.285668] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.820 [2024-06-09 08:56:17.285683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.820 [2024-06-09 08:56:17.298479] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.820 [2024-06-09 08:56:17.298493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.820 [2024-06-09 08:56:17.312324] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.820 [2024-06-09 08:56:17.312339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.820 [2024-06-09 08:56:17.325737] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.820 [2024-06-09 08:56:17.325751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.820 [2024-06-09 08:56:17.338951] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.820 [2024-06-09 08:56:17.338965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.820 [2024-06-09 08:56:17.351886] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.820 [2024-06-09 08:56:17.351901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.820 [2024-06-09 08:56:17.364534] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.820 [2024-06-09 08:56:17.364549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.820 [2024-06-09 08:56:17.377396] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.820 [2024-06-09 08:56:17.377416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.081 [2024-06-09 08:56:17.390246] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.081 [2024-06-09 08:56:17.390260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.081 [2024-06-09 08:56:17.403501] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.081 [2024-06-09 08:56:17.403515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.081 [2024-06-09 08:56:17.416376] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.081 [2024-06-09 08:56:17.416390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.081 [2024-06-09 08:56:17.429738] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.081 [2024-06-09 08:56:17.429752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.081 [2024-06-09 08:56:17.442692] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.081 [2024-06-09 08:56:17.442706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.081 [2024-06-09 08:56:17.455581] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.081 [2024-06-09 08:56:17.455596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.081 [2024-06-09 08:56:17.468400] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.081 [2024-06-09 08:56:17.468419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.082 [2024-06-09 08:56:17.481485] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:55.082 [2024-06-09 08:56:17.481500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.082 [2024-06-09 08:56:17.493786] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.082 [2024-06-09 08:56:17.493802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.082 [2024-06-09 08:56:17.506668] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.082 [2024-06-09 08:56:17.506683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.082 [2024-06-09 08:56:17.519520] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.082 [2024-06-09 08:56:17.519535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.082 [2024-06-09 08:56:17.532608] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.082 [2024-06-09 08:56:17.532624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.082 [2024-06-09 08:56:17.545579] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.082 [2024-06-09 08:56:17.545594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.082 [2024-06-09 08:56:17.558714] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.082 [2024-06-09 08:56:17.558729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.082 [2024-06-09 08:56:17.571372] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.082 [2024-06-09 08:56:17.571387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.082 [2024-06-09 08:56:17.584051] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.082 
[2024-06-09 08:56:17.584066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.082 [2024-06-09 08:56:17.597206] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.082 [2024-06-09 08:56:17.597221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.082 [2024-06-09 08:56:17.610055] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.082 [2024-06-09 08:56:17.610070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.082 [2024-06-09 08:56:17.623089] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.082 [2024-06-09 08:56:17.623104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.082 [2024-06-09 08:56:17.636469] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.082 [2024-06-09 08:56:17.636484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.649728] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.649743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.662662] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.662677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.676014] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.676029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.689427] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.689442] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.702546] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.702561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.715640] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.715654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.728933] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.728947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.742559] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.742574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.755708] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.755722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.768972] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.768987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.782384] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.782398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.795703] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.795717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:55.344 [2024-06-09 08:56:17.808965] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.808980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.821710] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.821726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.834346] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.834362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.847698] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.847714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.860701] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.860715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.873808] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.873823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.886534] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.886549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.344 [2024-06-09 08:56:17.899959] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.344 [2024-06-09 08:56:17.899974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.606 [2024-06-09 08:56:17.913087] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.606 [2024-06-09 08:56:17.913102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.606 [2024-06-09 08:56:17.925676] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.606 [2024-06-09 08:56:17.925691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.606 [2024-06-09 08:56:17.938277] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.606 [2024-06-09 08:56:17.938291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.606 [2024-06-09 08:56:17.950454] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.606 [2024-06-09 08:56:17.950469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.606 [2024-06-09 08:56:17.963669] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.606 [2024-06-09 08:56:17.963684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.606 [2024-06-09 08:56:17.976533] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.606 [2024-06-09 08:56:17.976548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.606 [2024-06-09 08:56:17.989479] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.606 [2024-06-09 08:56:17.989494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.606 [2024-06-09 08:56:18.002470] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.606 [2024-06-09 08:56:18.002485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.606 [2024-06-09 08:56:18.014834] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use
00:17:55.606 [2024-06-09 08:56:18.014848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.606 [2024-06-09 08:56:18.028178] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.606 [2024-06-09 08:56:18.028192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.606 [2024-06-09 08:56:18.041744] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.606 [2024-06-09 08:56:18.041759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.606 [2024-06-09 08:56:18.054149] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.606 [2024-06-09 08:56:18.054168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.606 [2024-06-09 08:56:18.066866] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.606 [2024-06-09 08:56:18.066881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.606 [2024-06-09 08:56:18.079860] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.606 [2024-06-09 08:56:18.079875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.606 [2024-06-09 08:56:18.092838] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.606 [2024-06-09 08:56:18.092852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.606 [2024-06-09 08:56:18.106283] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.606 [2024-06-09 08:56:18.106298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.606 [2024-06-09 08:56:18.119603] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.606 [2024-06-09 08:56:18.119618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.606 [2024-06-09 08:56:18.132806] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.606 [2024-06-09 08:56:18.132821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.606 [2024-06-09 08:56:18.145831] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.606 [2024-06-09 08:56:18.145845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.606 [2024-06-09 08:56:18.159028] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.606 [2024-06-09 08:56:18.159043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.172029] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.172044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.185154] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.185168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.198614] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.198629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.211233] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.211248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.224466] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.224481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.237519] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.237533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.249912] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.249926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.263139] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.263154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.275596] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.275611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.288985] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.288999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.301962] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.301981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.314760] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.314775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.327715] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.327730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.340807] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.340822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.353825] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.353839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.367295] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.367311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.380204] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.380219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.393195] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.393210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.406044] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.406058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.868 [2024-06-09 08:56:18.419397] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.868 [2024-06-09 08:56:18.419417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.432350] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.432365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.445573] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.445588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.458376] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.458391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.471202] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.471217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.484534] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.484548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.497759] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.497773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.511172] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.511187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.524155] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.524170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.537279] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.537294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.550717] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.550735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.563639] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.563654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.576815] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.576830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.589943] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.589957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.603064] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.603079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.616627] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.616641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.629726] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.629740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.643089] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.643104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.656345] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.656360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.669509] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.669524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.130 [2024-06-09 08:56:18.682752] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.130 [2024-06-09 08:56:18.682767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.695344] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.695359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.708368] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.708383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.721472] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.721486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.734739] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.734754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.747890] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.747904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.761033] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.761048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.774597] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.774612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.787820] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.787835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.800984] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.801003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.814684] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.814699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.827612] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.827627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.840908] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.840924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.854250] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.854265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.867563] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.867578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.880398] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.880418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.893065] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.893080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.906706] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.906720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.919702] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.919717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.932811] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.932825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.392 [2024-06-09 08:56:18.945869] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.392 [2024-06-09 08:56:18.945883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:18.959181] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:18.959196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:18.972637] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:18.972651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:18.985825] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:18.985839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:18.998761] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:18.998776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:19.011706] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:19.011721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:19.024583] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:19.024598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:19.037579] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:19.037593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:19.050531] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:19.050546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:19.063407] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:19.063422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:19.076472] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:19.076486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:19.089597] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:19.089611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:19.102801] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:19.102815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:19.116066] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:19.116080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:19.128657] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:19.128672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:19.141827] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:19.141842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:19.154993] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:19.155008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:19.168303] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:19.168318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:19.181474] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:19.181490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:19.194506] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:19.194522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.654 [2024-06-09 08:56:19.207916] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.654 [2024-06-09 08:56:19.207932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.220942] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.220958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.234331] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.234347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.247653] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.247668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.260230] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.260245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.273018] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.273033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.286185] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.286201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.299028] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.299043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.311398] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.311418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.324766] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.324782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.337499] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.337513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.350616] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.350631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.364238] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.364253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.376580] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.376594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.388985] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.389000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.401982] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.401997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.414949] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.414963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.428149] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.428163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.440415] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.440430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.453336] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.453351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:56.916 [2024-06-09 08:56:19.465974] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:56.916 [2024-06-09 08:56:19.465988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.479057] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.479072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.491975] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.491990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.505074] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.505089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.518605] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.518620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.531153] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.531168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.544138] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.544154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.556954] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.556969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.570124] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.570139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.583369] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.583383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.596742] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.596757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.609793] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.609808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.622217] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.622232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.635917] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.635932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.649320] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.649335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.662398] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.662418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.675642] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.675657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.688954] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.688969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.702100] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.702115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.715335] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.715350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.178 [2024-06-09 08:56:19.728391] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.178 [2024-06-09 08:56:19.728411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.439 [2024-06-09 08:56:19.741509] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.439 [2024-06-09 08:56:19.741524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.754801] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.754816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.767672] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.767686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.780423] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.780438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.793633] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.793648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.806590] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.806605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.819618] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.819633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.832423] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.832438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.845598] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.845613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.858888] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.858902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.872196] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.872211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.885375] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.885389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.898392] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.898411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.910838] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.910853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.924324] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.924338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.937601] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.937615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.951084] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.951098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.963070] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.963085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.976663] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.976678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.440 [2024-06-09 08:56:19.990072] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.440 [2024-06-09 08:56:19.990086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.003736] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.003751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.017238] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.017255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.030145] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.030166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.042663] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.042679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.055946] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.055961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.068933] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.068948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.081824] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.081839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.094728] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.094743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.107947] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.107961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.121131] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.121146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.133986] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.134001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.147170] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.147184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.160352] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.160366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.173350] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.173365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.186584] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.186600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.199627] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.199642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.212950] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.212964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.226251] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.226266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.238710] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.238725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.702 [2024-06-09 08:56:20.251755] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.702 [2024-06-09 08:56:20.251770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.963 [2024-06-09 08:56:20.265115] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.963
[2024-06-09 08:56:20.265130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.963 [2024-06-09 08:56:20.278288] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.963 [2024-06-09 08:56:20.278308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.963 [2024-06-09 08:56:20.291718] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.963 [2024-06-09 08:56:20.291733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.963 [2024-06-09 08:56:20.304756] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.964 [2024-06-09 08:56:20.304771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.964 [2024-06-09 08:56:20.317791] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.964 [2024-06-09 08:56:20.317806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.964 [2024-06-09 08:56:20.330968] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.964 [2024-06-09 08:56:20.330982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.964 [2024-06-09 08:56:20.344338] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.964 [2024-06-09 08:56:20.344352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.964 [2024-06-09 08:56:20.357659] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.964 [2024-06-09 08:56:20.357673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.964 [2024-06-09 08:56:20.371094] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.964 [2024-06-09 08:56:20.371109] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.964 [2024-06-09 08:56:20.384070] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.964 [2024-06-09 08:56:20.384084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.964 [2024-06-09 08:56:20.397064] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.964 [2024-06-09 08:56:20.397079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.964 [2024-06-09 08:56:20.409895] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.964 [2024-06-09 08:56:20.409911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.964 [2024-06-09 08:56:20.423291] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.964 [2024-06-09 08:56:20.423306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.964 [2024-06-09 08:56:20.436618] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.964 [2024-06-09 08:56:20.436633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.964 [2024-06-09 08:56:20.450050] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.964 [2024-06-09 08:56:20.450066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.964 [2024-06-09 08:56:20.463285] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.964 [2024-06-09 08:56:20.463300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.964 [2024-06-09 08:56:20.476208] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.964 [2024-06-09 08:56:20.476223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:57.964 [2024-06-09 08:56:20.489323] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.964 [2024-06-09 08:56:20.489337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.964 [2024-06-09 08:56:20.502889] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.964 [2024-06-09 08:56:20.502904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.964 [2024-06-09 08:56:20.515611] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.964 [2024-06-09 08:56:20.515626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.528603] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.528625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.541050] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.541065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.553697] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.553712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.567256] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.567270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.579788] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.579802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.592693] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.592707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.605246] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.605261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.617706] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.617721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.631418] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.631433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.644557] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.644572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.657801] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.657816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.671297] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.671312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.684440] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.684455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.697350] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.697365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.710297] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.710312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.723130] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.723144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.735668] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.735684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.748748] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.748763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.761941] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.761956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.226 [2024-06-09 08:56:20.774274] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.226 [2024-06-09 08:56:20.774292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.487 [2024-06-09 08:56:20.787116] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.487 [2024-06-09 08:56:20.787131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.487 [2024-06-09 08:56:20.799998] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.487 
[2024-06-09 08:56:20.800013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.487 [2024-06-09 08:56:20.812685] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.487 [2024-06-09 08:56:20.812699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.487 [2024-06-09 08:56:20.825356] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.487 [2024-06-09 08:56:20.825371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.487 [2024-06-09 08:56:20.838436] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.487 [2024-06-09 08:56:20.838451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.487 [2024-06-09 08:56:20.850775] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.487 [2024-06-09 08:56:20.850791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.487 [2024-06-09 08:56:20.864145] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.487 [2024-06-09 08:56:20.864160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.487 [2024-06-09 08:56:20.877611] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.487 [2024-06-09 08:56:20.877627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.487 [2024-06-09 08:56:20.890757] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.488 [2024-06-09 08:56:20.890771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.488 [2024-06-09 08:56:20.903818] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.488 [2024-06-09 08:56:20.903832] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.488 [2024-06-09 08:56:20.917198] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.488 [2024-06-09 08:56:20.917212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.488 [2024-06-09 08:56:20.929768] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.488 [2024-06-09 08:56:20.929783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.488 [2024-06-09 08:56:20.942898] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.488 [2024-06-09 08:56:20.942912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.488 [2024-06-09 08:56:20.956385] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.488 [2024-06-09 08:56:20.956400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.488 [2024-06-09 08:56:20.969952] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.488 [2024-06-09 08:56:20.969967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.488 [2024-06-09 08:56:20.983427] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.488 [2024-06-09 08:56:20.983442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.488 [2024-06-09 08:56:20.996548] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.488 [2024-06-09 08:56:20.996563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.488 [2024-06-09 08:56:21.010085] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.488 [2024-06-09 08:56:21.010100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:58.488 [2024-06-09 08:56:21.023178] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.488 [2024-06-09 08:56:21.023193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.488 [2024-06-09 08:56:21.035716] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.488 [2024-06-09 08:56:21.035731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.749 [2024-06-09 08:56:21.048709] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.749 [2024-06-09 08:56:21.048724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.749 [2024-06-09 08:56:21.061991] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.749 [2024-06-09 08:56:21.062005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.749 [2024-06-09 08:56:21.074993] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.749 [2024-06-09 08:56:21.075008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.749 [2024-06-09 08:56:21.087834] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.749 [2024-06-09 08:56:21.087849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.749 [2024-06-09 08:56:21.100459] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.749 [2024-06-09 08:56:21.100474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.749 [2024-06-09 08:56:21.113359] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.749 [2024-06-09 08:56:21.113374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.749 [2024-06-09 08:56:21.126426] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.749 [2024-06-09 08:56:21.126441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.749 [2024-06-09 08:56:21.139740] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.749 [2024-06-09 08:56:21.139754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.749 [2024-06-09 08:56:21.152535] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.749 [2024-06-09 08:56:21.152550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.749 [2024-06-09 08:56:21.165257] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.749 [2024-06-09 08:56:21.165272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.749 [2024-06-09 08:56:21.178300] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.749 [2024-06-09 08:56:21.178315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.749 [2024-06-09 08:56:21.191783] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.749 [2024-06-09 08:56:21.191799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.749 [2024-06-09 08:56:21.204813] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.750 [2024-06-09 08:56:21.204828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.750 [2024-06-09 08:56:21.218200] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.750 [2024-06-09 08:56:21.218216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.750 [2024-06-09 08:56:21.230895] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:58.750 [2024-06-09 08:56:21.230909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.750 [2024-06-09 08:56:21.243909] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.750 [2024-06-09 08:56:21.243924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.750 [2024-06-09 08:56:21.256330] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.750 [2024-06-09 08:56:21.256345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.750 [2024-06-09 08:56:21.269882] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.750 [2024-06-09 08:56:21.269897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.750 [2024-06-09 08:56:21.283190] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.750 [2024-06-09 08:56:21.283205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.750 [2024-06-09 08:56:21.295652] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.750 [2024-06-09 08:56:21.295666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.308930] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.308945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.322303] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.322318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.335924] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 
[2024-06-09 08:56:21.335939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.349057] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.349071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.362078] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.362092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.374848] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.374862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.387221] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.387236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.400783] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.400798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.413736] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.413751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.427396] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.427418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.440674] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.440690] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.453696] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.453711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.466740] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.466755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 00:17:59.011 Latency(us) 00:17:59.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.011 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:59.011 Nvme1n1 : 5.01 19588.63 153.04 0.00 0.00 6527.54 2635.09 27197.44 00:17:59.011 =================================================================================================================== 00:17:59.011 Total : 19588.63 153.04 0.00 0.00 6527.54 2635.09 27197.44 00:17:59.011 [2024-06-09 08:56:21.475992] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.476006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.488019] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.488030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.500053] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.500065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.512084] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.512096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.524115] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.524126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.536144] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.536154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.548172] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.548180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.011 [2024-06-09 08:56:21.560204] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.011 [2024-06-09 08:56:21.560214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.272 [2024-06-09 08:56:21.572234] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.272 [2024-06-09 08:56:21.572244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.272 [2024-06-09 08:56:21.584266] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.272 [2024-06-09 08:56:21.584276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.272 [2024-06-09 08:56:21.596295] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.272 [2024-06-09 08:56:21.596302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2564541) - No such process 00:17:59.272 08:56:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2564541 00:17:59.272 08:56:21 nvmf_tcp.nvmf_zcopy -- 
target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:59.272 08:56:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.272 08:56:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:59.272 08:56:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.272 08:56:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:59.272 08:56:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.272 08:56:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:59.272 delay0 00:17:59.272 08:56:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.272 08:56:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:59.272 08:56:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.272 08:56:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:59.272 08:56:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.272 08:56:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:59.272 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.272 [2024-06-09 08:56:21.734644] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:05.857 Initializing NVMe Controllers 00:18:05.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:05.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:05.857 Initialization complete. 
Launching workers. 00:18:05.858 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 69 00:18:05.858 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 356, failed to submit 33 00:18:05.858 success 115, unsuccess 241, failed 0 00:18:05.858 08:56:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:05.858 08:56:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:05.858 08:56:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:05.858 08:56:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:05.858 08:56:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:05.858 08:56:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:05.858 08:56:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:05.858 08:56:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:05.858 rmmod nvme_tcp 00:18:05.858 rmmod nvme_fabrics 00:18:05.858 rmmod nvme_keyring 00:18:05.858 08:56:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2562389 ']' 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2562389 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 2562389 ']' 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 2562389 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2562389 00:18:05.858 
08:56:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2562389' 00:18:05.858 killing process with pid 2562389 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 2562389 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 2562389 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.858 08:56:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.772 08:56:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:07.772 00:18:07.772 real 0m32.948s 00:18:07.772 user 0m44.753s 00:18:07.772 sys 0m9.901s 00:18:07.772 08:56:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:07.772 08:56:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:07.772 ************************************ 00:18:07.772 END TEST nvmf_zcopy 00:18:07.772 ************************************ 00:18:07.772 08:56:30 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 
00:18:07.772 08:56:30 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:07.772 08:56:30 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:07.772 08:56:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:07.772 ************************************ 00:18:07.772 START TEST nvmf_nmic 00:18:07.772 ************************************ 00:18:07.772 08:56:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:08.072 * Looking for test storage... 00:18:08.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:08.072 08:56:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.072 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:08.072 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.072 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.072 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.073 08:56:30 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:08.073 
08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:08.073 08:56:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:16.223 08:56:37 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:16.223 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:16.223 Found 0000:4b:00.1 (0x8086 - 0x159b) 
00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:16.223 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.223 08:56:37 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:16.223 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:16.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:18:16.223 00:18:16.223 --- 10.0.0.2 ping statistics --- 00:18:16.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.223 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:18:16.223 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:16.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:16.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:18:16.224 00:18:16.224 --- 10.0.0.1 ping statistics --- 00:18:16.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.224 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2571080 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2571080 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 2571080 ']' 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:16.224 08:56:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:16.224 [2024-06-09 08:56:37.665818] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:18:16.224 [2024-06-09 08:56:37.665884] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.224 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.224 [2024-06-09 08:56:37.737844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:16.224 [2024-06-09 08:56:37.803947] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.224 [2024-06-09 08:56:37.803982] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.224 [2024-06-09 08:56:37.803990] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.224 [2024-06-09 08:56:37.803997] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.224 [2024-06-09 08:56:37.804002] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:16.224 [2024-06-09 08:56:37.807421] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.224 [2024-06-09 08:56:37.807468] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.224 [2024-06-09 08:56:37.807624] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:18:16.224 [2024-06-09 08:56:37.807625] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:16.224 [2024-06-09 08:56:38.482957] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:16.224 Malloc0 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:16.224 [2024-06-09 08:56:38.542441] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:16.224 test case1: single bdev can't be used in multiple subsystems 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:16.224 [2024-06-09 08:56:38.578351] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:16.224 [2024-06-09 08:56:38.578369] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:16.224 [2024-06-09 08:56:38.578377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.224 request: 00:18:16.224 { 00:18:16.224 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:16.224 "namespace": { 00:18:16.224 "bdev_name": "Malloc0", 00:18:16.224 "no_auto_visible": false 00:18:16.224 }, 00:18:16.224 "method": "nvmf_subsystem_add_ns", 00:18:16.224 "req_id": 1 00:18:16.224 } 00:18:16.224 Got JSON-RPC error response 00:18:16.224 response: 00:18:16.224 { 00:18:16.224 "code": -32602, 00:18:16.224 "message": "Invalid parameters" 00:18:16.224 } 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:18:16.224 Adding namespace failed - expected result. 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:16.224 test case2: host connect to nvmf target in multiple paths 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:16.224 [2024-06-09 08:56:38.590494] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:16.224 08:56:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.225 08:56:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:17.611 08:56:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:19.527 08:56:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:19.527 08:56:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:18:19.527 08:56:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:19.527 08:56:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:18:19.527 08:56:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:18:21.483 08:56:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:21.483 08:56:43 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:21.483 08:56:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:21.483 08:56:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:18:21.483 08:56:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:21.483 08:56:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:18:21.483 08:56:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:21.483 [global] 00:18:21.483 thread=1 00:18:21.483 invalidate=1 00:18:21.483 rw=write 00:18:21.483 time_based=1 00:18:21.483 runtime=1 00:18:21.483 ioengine=libaio 00:18:21.483 direct=1 00:18:21.483 bs=4096 00:18:21.483 iodepth=1 00:18:21.483 norandommap=0 00:18:21.483 numjobs=1 00:18:21.483 00:18:21.483 verify_dump=1 00:18:21.483 verify_backlog=512 00:18:21.483 verify_state_save=0 00:18:21.483 do_verify=1 00:18:21.483 verify=crc32c-intel 00:18:21.483 [job0] 00:18:21.483 filename=/dev/nvme0n1 00:18:21.483 Could not set queue depth (nvme0n1) 00:18:21.743 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:21.743 fio-3.35 00:18:21.743 Starting 1 thread 00:18:23.129 00:18:23.129 job0: (groupid=0, jobs=1): err= 0: pid=2572563: Sun Jun 9 08:56:45 2024 00:18:23.129 read: IOPS=11, BW=46.2KiB/s (47.4kB/s)(48.0KiB/1038msec) 00:18:23.129 slat (nsec): min=24892, max=26431, avg=25534.58, stdev=484.66 00:18:23.129 clat (usec): min=41925, max=42325, avg=41990.89, stdev=107.56 00:18:23.129 lat (usec): min=41951, max=42350, avg=42016.42, stdev=107.50 00:18:23.129 clat percentiles (usec): 00:18:23.129 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:18:23.129 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 
00:18:23.129 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:23.130 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:23.130 | 99.99th=[42206] 00:18:23.130 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:18:23.130 slat (usec): min=13, max=27760, avg=87.61, stdev=1225.37 00:18:23.130 clat (usec): min=702, max=1150, avg=947.47, stdev=59.38 00:18:23.130 lat (usec): min=735, max=28733, avg=1035.08, stdev=1227.93 00:18:23.130 clat percentiles (usec): 00:18:23.130 | 1.00th=[ 742], 5.00th=[ 832], 10.00th=[ 865], 20.00th=[ 906], 00:18:23.130 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 971], 00:18:23.130 | 70.00th=[ 979], 80.00th=[ 988], 90.00th=[ 996], 95.00th=[ 1012], 00:18:23.130 | 99.00th=[ 1074], 99.50th=[ 1139], 99.90th=[ 1156], 99.95th=[ 1156], 00:18:23.130 | 99.99th=[ 1156] 00:18:23.130 bw ( KiB/s): min= 272, max= 3824, per=100.00%, avg=2048.00, stdev=2511.64, samples=2 00:18:23.130 iops : min= 68, max= 956, avg=512.00, stdev=627.91, samples=2 00:18:23.130 lat (usec) : 750=1.15%, 1000=88.36% 00:18:23.130 lat (msec) : 2=8.21%, 50=2.29% 00:18:23.130 cpu : usr=0.68%, sys=1.83%, ctx=528, majf=0, minf=1 00:18:23.130 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:23.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.130 issued rwts: total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:23.130 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:23.130 00:18:23.130 Run status group 0 (all jobs): 00:18:23.130 READ: bw=46.2KiB/s (47.4kB/s), 46.2KiB/s-46.2KiB/s (47.4kB/s-47.4kB/s), io=48.0KiB (49.2kB), run=1038-1038msec 00:18:23.130 WRITE: bw=1973KiB/s (2020kB/s), 1973KiB/s-1973KiB/s (2020kB/s-2020kB/s), io=2048KiB (2097kB), run=1038-1038msec 00:18:23.130 00:18:23.130 Disk stats (read/write): 00:18:23.130 nvme0n1: 
ios=33/512, merge=0/0, ticks=1304/450, in_queue=1754, util=98.90% 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:23.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:23.130 rmmod nvme_tcp 00:18:23.130 rmmod nvme_fabrics 00:18:23.130 rmmod nvme_keyring 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:23.130 
08:56:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2571080 ']' 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2571080 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 2571080 ']' 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 2571080 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2571080 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2571080' 00:18:23.130 killing process with pid 2571080 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 2571080 00:18:23.130 08:56:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 2571080 00:18:23.391 08:56:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:23.391 08:56:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:23.391 08:56:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:23.391 08:56:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:23.391 08:56:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:23.391 08:56:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.391 08:56:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.391 08:56:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.305 
08:56:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:25.305 00:18:25.305 real 0m17.525s 00:18:25.305 user 0m49.173s 00:18:25.305 sys 0m6.082s 00:18:25.305 08:56:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:25.305 08:56:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:25.305 ************************************ 00:18:25.305 END TEST nvmf_nmic 00:18:25.305 ************************************ 00:18:25.566 08:56:47 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:25.566 08:56:47 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:25.567 08:56:47 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:25.567 08:56:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:25.567 ************************************ 00:18:25.567 START TEST nvmf_fio_target 00:18:25.567 ************************************ 00:18:25.567 08:56:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:25.567 * Looking for test storage... 
00:18:25.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:25.567 08:56:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:33.707 
08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:33.707 
08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:33.707 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:33.707 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:33.707 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:33.707 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:33.707 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:33.708 08:56:54 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.708 08:56:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:33.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:33.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.740 ms 00:18:33.708 00:18:33.708 --- 10.0.0.2 ping statistics --- 00:18:33.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.708 rtt min/avg/max/mdev = 0.740/0.740/0.740/0.000 ms 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:33.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:33.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.468 ms 00:18:33.708 00:18:33.708 --- 10.0.0.1 ping statistics --- 00:18:33.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.708 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2576956 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2576956 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 
-- # '[' -z 2576956 ']' 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:33.708 08:56:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.708 [2024-06-09 08:56:55.412939] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:18:33.708 [2024-06-09 08:56:55.413001] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.708 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.708 [2024-06-09 08:56:55.483045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:33.708 [2024-06-09 08:56:55.558018] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.708 [2024-06-09 08:56:55.558055] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.708 [2024-06-09 08:56:55.558062] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.708 [2024-06-09 08:56:55.558068] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.708 [2024-06-09 08:56:55.558074] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:33.708 [2024-06-09 08:56:55.558209] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:18:33.708 [2024-06-09 08:56:55.558336] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:18:33.708 [2024-06-09 08:56:55.558496] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:18:33.708 [2024-06-09 08:56:55.558496] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:18:33.708 08:56:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:18:33.708 08:56:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0
00:18:33.708 08:56:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:18:33.708 08:56:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable
00:18:33.708 08:56:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:18:33.708 08:56:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:33.708 08:56:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:18:33.967 [2024-06-09 08:56:56.370411] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:33.967 08:56:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:34.226 08:56:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:18:34.226 08:56:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:34.226 08:56:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:18:34.226 08:56:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:34.485 08:56:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:18:34.485 08:56:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:34.745 08:56:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:18:34.745 08:56:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:18:34.745 08:56:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:35.004 08:56:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:18:35.004 08:56:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:35.279 08:56:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:18:35.279 08:56:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:35.279 08:56:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:18:35.279 08:56:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:18:35.547 08:56:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:18:35.808 08:56:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:18:35.808 08:56:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:18:35.808 08:56:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:18:35.808 08:56:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:18:36.068 08:56:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:36.068 [2024-06-09 08:56:58.605189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:36.329 08:56:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:18:36.329 08:56:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:18:36.589 08:56:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:18:37.974 08:57:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:18:37.974 08:57:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0
00:18:37.974 08:57:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0
00:18:37.974 08:57:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]]
00:18:37.974 08:57:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4
00:18:37.974 08:57:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2
00:18:40.519 08:57:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 ))
00:18:40.519 08:57:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:18:40.519 08:57:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME
00:18:40.519 08:57:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4
00:18:40.519 08:57:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter ))
00:18:40.519 08:57:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0
00:18:40.519 08:57:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:18:40.519 [global]
00:18:40.519 thread=1
00:18:40.519 invalidate=1
00:18:40.519 rw=write
00:18:40.519 time_based=1
00:18:40.519 runtime=1
00:18:40.519 ioengine=libaio
00:18:40.519 direct=1
00:18:40.519 bs=4096
00:18:40.519 iodepth=1
00:18:40.519 norandommap=0
00:18:40.519 numjobs=1
00:18:40.519 
00:18:40.519 verify_dump=1
00:18:40.519 verify_backlog=512
00:18:40.519 verify_state_save=0
00:18:40.519 do_verify=1
00:18:40.519 verify=crc32c-intel
00:18:40.519 [job0]
00:18:40.519 filename=/dev/nvme0n1
00:18:40.519 [job1]
00:18:40.519 filename=/dev/nvme0n2
00:18:40.519 [job2]
00:18:40.519 filename=/dev/nvme0n3
00:18:40.519 [job3]
00:18:40.519 filename=/dev/nvme0n4
00:18:40.519 Could not set queue depth (nvme0n1)
00:18:40.519 Could not set queue depth (nvme0n2)
00:18:40.519 Could not set queue depth (nvme0n3)
00:18:40.519 Could not set queue depth (nvme0n4)
00:18:40.519 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:40.519 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:40.519 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:40.519 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:40.519 fio-3.35
00:18:40.519 Starting 4 threads
00:18:41.927 
00:18:41.927 job0: (groupid=0, jobs=1): err= 0: pid=2578729: Sun Jun 9 08:57:04 2024
00:18:41.927 read: IOPS=11, BW=47.0KiB/s (48.1kB/s)(48.0KiB/1021msec)
00:18:41.927 slat (nsec): min=25005, max=25254, avg=25128.08, stdev=83.99
00:18:41.927 clat (usec): min=41928, max=42996, avg=42461.28, stdev=500.43
00:18:41.927 lat (usec): min=41953, max=43021, avg=42486.40, stdev=500.46
00:18:41.927 clat percentiles (usec):
00:18:41.927 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206],
00:18:41.927 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42730],
00:18:41.927 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254],
00:18:41.927 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254],
00:18:41.927 | 99.99th=[43254]
00:18:41.927 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets
00:18:41.927 slat (nsec): min=11910, max=53445, avg=33959.74, stdev=2341.00
00:18:41.927 clat (usec): min=637, max=1183, avg=956.07, stdev=76.55
00:18:41.927 lat (usec): min=671, max=1217, avg=990.03, stdev=76.44
00:18:41.927 clat percentiles (usec):
00:18:41.927 | 1.00th=[ 758], 5.00th=[ 832], 10.00th=[ 857], 20.00th=[ 889],
00:18:41.927 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 988],
00:18:41.927 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1057],
00:18:41.927 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1188], 99.95th=[ 1188],
00:18:41.927 | 99.99th=[ 1188]
00:18:41.927 bw ( KiB/s): min= 80, max= 4016, per=25.73%, avg=2048.00, stdev=2783.17, samples=2
00:18:41.927 iops : min= 20, max= 1004, avg=512.00, stdev=695.79, samples=2
00:18:41.927 lat (usec) : 750=0.76%, 1000=66.41%
00:18:41.927 lat (msec) : 2=30.53%, 50=2.29%
00:18:41.927 cpu : usr=0.78%, sys=1.67%, ctx=527, majf=0, minf=1
00:18:41.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:41.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:41.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:41.927 issued rwts: total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:41.927 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:41.927 job1: (groupid=0, jobs=1): err= 0: pid=2578747: Sun Jun 9 08:57:04 2024
00:18:41.927 read: IOPS=11, BW=47.3KiB/s (48.5kB/s)(48.0KiB/1014msec)
00:18:41.927 slat (nsec): min=25922, max=26248, avg=26079.50, stdev=101.74
00:18:41.927 clat (usec): min=41925, max=43109, avg=42057.29, stdev=332.00
00:18:41.927 lat (usec): min=41951, max=43136, avg=42083.37, stdev=332.03
00:18:41.927 clat percentiles (usec):
00:18:41.927 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206],
00:18:41.927 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206],
00:18:41.927 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43254],
00:18:41.927 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254],
00:18:41.927 | 99.99th=[43254]
00:18:41.927 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets
00:18:41.927 slat (nsec): min=10274, max=52498, avg=34271.85, stdev=3093.15
00:18:41.927 clat (usec): min=601, max=2274, avg=950.41, stdev=110.36
00:18:41.927 lat (usec): min=636, max=2322, avg=984.68, stdev=111.00
00:18:41.927 clat percentiles (usec):
00:18:41.927 | 1.00th=[ 676], 5.00th=[ 783], 10.00th=[ 840], 20.00th=[ 873],
00:18:41.927 | 30.00th=[ 906], 40.00th=[ 938], 50.00th=[ 963], 60.00th=[ 979],
00:18:41.927 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1074],
00:18:41.927 | 99.00th=[ 1139], 99.50th=[ 1188], 99.90th=[ 2278], 99.95th=[ 2278],
00:18:41.927 | 99.99th=[ 2278]
00:18:41.927 bw ( KiB/s): min= 64, max= 4032, per=25.73%, avg=2048.00, stdev=2805.80, samples=2
00:18:41.927 iops : min= 16, max= 1008, avg=512.00, stdev=701.45, samples=2
00:18:41.927 lat (usec) : 750=2.67%, 1000=64.50%
00:18:41.927 lat (msec) : 2=30.34%, 4=0.19%, 50=2.29%
00:18:41.927 cpu : usr=0.79%, sys=2.47%, ctx=526, majf=0, minf=1
00:18:41.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:41.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:41.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:41.927 issued rwts: total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:41.927 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:41.927 job2: (groupid=0, jobs=1): err= 0: pid=2578767: Sun Jun 9 08:57:04 2024
00:18:41.927 read: IOPS=11, BW=46.6KiB/s (47.8kB/s)(48.0KiB/1029msec)
00:18:41.927 slat (nsec): min=26843, max=43602, avg=28528.58, stdev=4762.17
00:18:41.927 clat (usec): min=41895, max=44015, avg=42357.12, stdev=656.57
00:18:41.927 lat (usec): min=41922, max=44042, avg=42385.65, stdev=657.34
00:18:41.927 clat percentiles (usec):
00:18:41.927 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206],
00:18:41.927 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206],
00:18:41.927 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[43779],
00:18:41.927 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779],
00:18:41.927 | 99.99th=[43779]
00:18:41.927 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets
00:18:41.927 slat (usec): min=10, max=2806, avg=42.84, stdev=124.35
00:18:41.927 clat (usec): min=642, max=1672, avg=964.44, stdev=110.63
00:18:41.927 lat (usec): min=678, max=3846, avg=1007.28, stdev=170.09
00:18:41.927 clat percentiles (usec):
00:18:41.927 | 1.00th=[ 717], 5.00th=[ 791], 10.00th=[ 840], 20.00th=[ 881],
00:18:41.927 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 979],
00:18:41.927 | 70.00th=[ 996], 80.00th=[ 1029], 90.00th=[ 1074], 95.00th=[ 1139],
00:18:41.927 | 99.00th=[ 1319], 99.50th=[ 1418], 99.90th=[ 1680], 99.95th=[ 1680],
00:18:41.927 | 99.99th=[ 1680]
00:18:41.927 bw ( KiB/s): min= 160, max= 3936, per=25.73%, avg=2048.00, stdev=2670.04, samples=2
00:18:41.927 iops : min= 40, max= 984, avg=512.00, stdev=667.51, samples=2
00:18:41.927 lat (usec) : 750=2.10%, 1000=67.18%
00:18:41.927 lat (msec) : 2=28.44%, 50=2.29%
00:18:41.927 cpu : usr=0.68%, sys=2.72%, ctx=529, majf=0, minf=1
00:18:41.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:41.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:41.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:41.927 issued rwts: total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:41.927 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:41.927 job3: (groupid=0, jobs=1): err= 0: pid=2578773: Sun Jun 9 08:57:04 2024
00:18:41.927 read: IOPS=10, BW=42.8KiB/s (43.9kB/s)(44.0KiB/1027msec)
00:18:41.927 slat (nsec): min=25510, max=26086, avg=25705.09, stdev=164.47
00:18:41.927 clat (usec): min=41932, max=42966, avg=42435.42, stdev=490.25
00:18:41.927 lat (usec): min=41958, max=42991, avg=42461.13, stdev=490.32
00:18:41.927 clat percentiles (usec):
00:18:41.927 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206],
00:18:41.927 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42730],
00:18:41.927 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730],
00:18:41.927 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:18:41.927 | 99.99th=[42730]
00:18:41.927 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets
00:18:41.927 slat (usec): min=11, max=45924, avg=125.24, stdev=2028.01
00:18:41.927 clat (usec): min=629, max=1195, avg=958.97, stdev=72.30
00:18:41.927 lat (usec): min=662, max=46933, avg=1084.21, stdev=2031.52
00:18:41.927 clat percentiles (usec):
00:18:41.927 | 1.00th=[ 734], 5.00th=[ 848], 10.00th=[ 865], 20.00th=[ 898],
00:18:41.927 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 988],
00:18:41.927 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1029], 95.00th=[ 1057],
00:18:41.928 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[ 1188], 99.95th=[ 1188],
00:18:41.928 | 99.99th=[ 1188]
00:18:41.928 bw ( KiB/s): min= 480, max= 3616, per=25.73%, avg=2048.00, stdev=2217.49, samples=2
00:18:41.928 iops : min= 120, max= 904, avg=512.00, stdev=554.37, samples=2
00:18:41.928 lat (usec) : 750=1.34%, 1000=68.64%
00:18:41.928 lat (msec) : 2=27.92%, 50=2.10%
00:18:41.928 cpu : usr=0.00%, sys=2.53%, ctx=526, majf=0, minf=1
00:18:41.928 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:41.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:41.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:41.928 issued rwts: total=11,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:41.928 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:41.928 
00:18:41.928 Run status group 0 (all jobs):
00:18:41.928 READ: bw=183KiB/s (187kB/s), 42.8KiB/s-47.3KiB/s (43.9kB/s-48.5kB/s), io=188KiB (193kB), run=1014-1029msec
00:18:41.928 WRITE: bw=7961KiB/s (8152kB/s), 1990KiB/s-2020KiB/s (2038kB/s-2068kB/s), io=8192KiB (8389kB), run=1014-1029msec
00:18:41.928 
00:18:41.928 Disk stats (read/write):
00:18:41.928 nvme0n1: ios=29/512, merge=0/0, ticks=1149/495, in_queue=1644, util=84.27%
00:18:41.928 nvme0n2: ios=29/512, merge=0/0, ticks=1180/479, in_queue=1659, util=88.16%
00:18:41.928 nvme0n3: ios=57/512, merge=0/0, ticks=427/471, in_queue=898, util=94.93%
00:18:41.928 nvme0n4: ios=56/512, merge=0/0, ticks=638/505, in_queue=1143, util=96.79%
00:18:41.928 08:57:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:18:41.928 [global]
00:18:41.928 thread=1
00:18:41.928 invalidate=1
00:18:41.928 rw=randwrite
00:18:41.928 time_based=1
00:18:41.928 runtime=1
00:18:41.928 ioengine=libaio
00:18:41.928 direct=1
00:18:41.928 bs=4096
00:18:41.928 iodepth=1
00:18:41.928 norandommap=0
00:18:41.928 numjobs=1
00:18:41.928 
00:18:41.928 verify_dump=1
00:18:41.928 verify_backlog=512
00:18:41.928 verify_state_save=0
00:18:41.928 do_verify=1
00:18:41.928 verify=crc32c-intel
00:18:41.928 [job0]
00:18:41.928 filename=/dev/nvme0n1
00:18:41.928 [job1]
00:18:41.928 filename=/dev/nvme0n2
00:18:41.928 [job2]
00:18:41.928 filename=/dev/nvme0n3
00:18:41.928 [job3]
00:18:41.928 filename=/dev/nvme0n4
00:18:41.928 Could not set queue depth (nvme0n1)
00:18:41.928 Could not set queue depth (nvme0n2)
00:18:41.928 Could not set queue depth (nvme0n3)
00:18:41.928 Could not set queue depth (nvme0n4)
00:18:42.188 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:42.188 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:42.188 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:42.188 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:42.188 fio-3.35
00:18:42.188 Starting 4 threads
00:18:43.607 
00:18:43.607 job0: (groupid=0, jobs=1): err= 0: pid=2579208: Sun Jun 9 08:57:05 2024
00:18:43.607 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:18:43.607 slat (nsec): min=6586, max=59642, avg=27019.15, stdev=4943.78
00:18:43.607 clat (usec): min=567, max=1424, avg=867.22, stdev=101.32
00:18:43.607 lat (usec): min=594, max=1451, avg=894.24, stdev=101.87
00:18:43.607 clat percentiles (usec):
00:18:43.607 | 1.00th=[ 652], 5.00th=[ 693], 10.00th=[ 734], 20.00th=[ 783],
00:18:43.607 | 30.00th=[ 824], 40.00th=[ 848], 50.00th=[ 873], 60.00th=[ 889],
00:18:43.607 | 70.00th=[ 914], 80.00th=[ 938], 90.00th=[ 979], 95.00th=[ 1020],
00:18:43.607 | 99.00th=[ 1139], 99.50th=[ 1188], 99.90th=[ 1418], 99.95th=[ 1418],
00:18:43.607 | 99.99th=[ 1418]
00:18:43.607 write: IOPS=779, BW=3117KiB/s (3192kB/s)(3120KiB/1001msec); 0 zone resets
00:18:43.607 slat (nsec): min=4031, max=69323, avg=27755.71, stdev=11105.25
00:18:43.607 clat (usec): min=275, max=4111, avg=654.84, stdev=232.37
00:18:43.607 lat (usec): min=285, max=4116, avg=682.59, stdev=229.12
00:18:43.607 clat percentiles (usec):
00:18:43.607 | 1.00th=[ 326], 5.00th=[ 412], 10.00th=[ 449], 20.00th=[ 506],
00:18:43.607 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 611], 60.00th=[ 644],
00:18:43.607 | 70.00th=[ 693], 80.00th=[ 766], 90.00th=[ 979], 95.00th=[ 1057],
00:18:43.607 | 99.00th=[ 1205], 99.50th=[ 1631], 99.90th=[ 4113], 99.95th=[ 4113],
00:18:43.607 | 99.99th=[ 4113]
00:18:43.607 bw ( KiB/s): min= 4096, max= 4096, per=45.41%, avg=4096.00, stdev= 0.00, samples=1
00:18:43.607 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:18:43.607 lat (usec) : 500=11.07%, 750=40.71%, 1000=40.02%
00:18:43.607 lat (msec) : 2=8.13%, 10=0.08%
00:18:43.607 cpu : usr=2.60%, sys=4.70%, ctx=1294, majf=0, minf=1
00:18:43.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:43.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:43.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:43.607 issued rwts: total=512,780,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:43.607 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:43.607 job1: (groupid=0, jobs=1): err= 0: pid=2579217: Sun Jun 9 08:57:05 2024
00:18:43.607 read: IOPS=11, BW=46.7KiB/s (47.9kB/s)(48.0KiB/1027msec)
00:18:43.607 slat (nsec): min=24025, max=24372, avg=24151.92, stdev=98.48
00:18:43.607 clat (usec): min=41954, max=43006, avg=42644.27, stdev=441.49
00:18:43.607 lat (usec): min=41978, max=43030, avg=42668.42, stdev=441.43
00:18:43.607 clat percentiles (usec):
00:18:43.607 | 1.00th=[42206], 5.00th=[42206], 10.00th=[42206], 20.00th=[42206],
00:18:43.607 | 30.00th=[42206], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730],
00:18:43.607 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254],
00:18:43.607 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254],
00:18:43.607 | 99.99th=[43254]
00:18:43.607 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets
00:18:43.607 slat (nsec): min=10475, max=68080, avg=30167.79, stdev=2304.31
00:18:43.607 clat (usec): min=741, max=1215, avg=967.30, stdev=71.32
00:18:43.607 lat (usec): min=770, max=1245, avg=997.47, stdev=71.03
00:18:43.607 clat percentiles (usec):
00:18:43.607 | 1.00th=[ 775], 5.00th=[ 848], 10.00th=[ 881], 20.00th=[ 906],
00:18:43.607 | 30.00th=[ 930], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 996],
00:18:43.607 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1057],
00:18:43.607 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1221], 99.95th=[ 1221],
00:18:43.607 | 99.99th=[ 1221]
00:18:43.607 bw ( KiB/s): min= 112, max= 3984, per=22.70%, avg=2048.00, stdev=2737.92, samples=2
00:18:43.607 iops : min= 28, max= 996, avg=512.00, stdev=684.48, samples=2
00:18:43.607 lat (usec) : 750=0.76%, 1000=61.07%
00:18:43.607 lat (msec) : 2=35.88%, 50=2.29%
00:18:43.607 cpu : usr=0.68%, sys=1.66%, ctx=524, majf=0, minf=1
00:18:43.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:43.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:43.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:43.607 issued rwts: total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:43.607 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:43.607 job2: (groupid=0, jobs=1): err= 0: pid=2579234: Sun Jun 9 08:57:05 2024
00:18:43.607 read: IOPS=321, BW=1285KiB/s (1316kB/s)(1288KiB/1002msec)
00:18:43.607 slat (nsec): min=24919, max=46344, avg=26142.42, stdev=3618.07
00:18:43.607 clat (usec): min=1136, max=1647, avg=1471.21, stdev=65.54
00:18:43.607 lat (usec): min=1162, max=1673, avg=1497.35, stdev=65.37
00:18:43.607 clat percentiles (usec):
00:18:43.607 | 1.00th=[ 1303], 5.00th=[ 1352], 10.00th=[ 1385], 20.00th=[ 1434],
00:18:43.607 | 30.00th=[ 1450], 40.00th=[ 1467], 50.00th=[ 1467], 60.00th=[ 1500],
00:18:43.607 | 70.00th=[ 1500], 80.00th=[ 1516], 90.00th=[ 1549], 95.00th=[ 1565],
00:18:43.607 | 99.00th=[ 1614], 99.50th=[ 1631], 99.90th=[ 1647], 99.95th=[ 1647],
00:18:43.607 | 99.99th=[ 1647]
00:18:43.607 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets
00:18:43.608 slat (nsec): min=9948, max=48697, avg=32055.14, stdev=2665.76
00:18:43.608 clat (usec): min=734, max=1351, avg=964.52, stdev=76.05
00:18:43.608 lat (usec): min=756, max=1383, avg=996.57, stdev=76.28
00:18:43.608 clat percentiles (usec):
00:18:43.608 | 1.00th=[ 775], 5.00th=[ 848], 10.00th=[ 873], 20.00th=[ 898],
00:18:43.608 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 971], 60.00th=[ 996],
00:18:43.608 | 70.00th=[ 1012], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1074],
00:18:43.608 | 99.00th=[ 1172], 99.50th=[ 1254], 99.90th=[ 1352], 99.95th=[ 1352],
00:18:43.608 | 99.99th=[ 1352]
00:18:43.608 bw ( KiB/s): min= 3976, max= 3976, per=44.08%, avg=3976.00, stdev= 0.00, samples=1
00:18:43.608 iops : min= 994, max= 994, avg=994.00, stdev= 0.00, samples=1
00:18:43.608 lat (usec) : 750=0.24%, 1000=39.93%
00:18:43.608 lat (msec) : 2=59.83%
00:18:43.608 cpu : usr=0.80%, sys=3.10%, ctx=836, majf=0, minf=1
00:18:43.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:43.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:43.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:43.608 issued rwts: total=322,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:43.608 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:43.608 job3: (groupid=0, jobs=1): err= 0: pid=2579241: Sun Jun 9 08:57:05 2024
00:18:43.608 read: IOPS=375, BW=1501KiB/s (1537kB/s)(1504KiB/1002msec)
00:18:43.608 slat (nsec): min=25950, max=63458, avg=27131.44, stdev=3252.95
00:18:43.608 clat (usec): min=899, max=1464, avg=1283.70, stdev=82.89
00:18:43.608 lat (usec): min=926, max=1507, avg=1310.83, stdev=82.87
00:18:43.608 clat percentiles (usec):
00:18:43.608 | 1.00th=[ 1004], 5.00th=[ 1074], 10.00th=[ 1188], 20.00th=[ 1237],
00:18:43.608 | 30.00th=[ 1270], 40.00th=[ 1287], 50.00th=[ 1303], 60.00th=[ 1303],
00:18:43.608 | 70.00th=[ 1336], 80.00th=[ 1352], 90.00th=[ 1369], 95.00th=[ 1385],
00:18:43.608 | 99.00th=[ 1418], 99.50th=[ 1434], 99.90th=[ 1467], 99.95th=[ 1467],
00:18:43.608 | 99.99th=[ 1467]
00:18:43.608 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets
00:18:43.608 slat (nsec): min=10057, max=51165, avg=32858.26, stdev=4745.20
00:18:43.608 clat (usec): min=534, max=1210, avg=941.97, stdev=108.57
00:18:43.608 lat (usec): min=571, max=1243, avg=974.83, stdev=109.69
00:18:43.608 clat percentiles (usec):
00:18:43.608 | 1.00th=[ 627], 5.00th=[ 750], 10.00th=[ 783], 20.00th=[ 857],
00:18:43.608 | 30.00th=[ 906], 40.00th=[ 938], 50.00th=[ 963], 60.00th=[ 988],
00:18:43.608 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1090],
00:18:43.608 | 99.00th=[ 1172], 99.50th=[ 1172], 99.90th=[ 1205], 99.95th=[ 1205],
00:18:43.608 | 99.99th=[ 1205]
00:18:43.608 bw ( KiB/s): min= 4064, max= 4064, per=45.05%, avg=4064.00, stdev= 0.00, samples=1
00:18:43.608 iops : min= 1016, max= 1016, avg=1016.00, stdev= 0.00, samples=1
00:18:43.608 lat (usec) : 750=3.27%, 1000=35.59%
00:18:43.608 lat (msec) : 2=61.15%
00:18:43.608 cpu : usr=2.00%, sys=3.60%, ctx=890, majf=0, minf=1
00:18:43.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:43.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:43.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:43.608 issued rwts: total=376,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:43.608 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:43.608 
00:18:43.608 Run status group 0 (all jobs):
00:18:43.608 READ: bw=4759KiB/s (4874kB/s), 46.7KiB/s-2046KiB/s (47.9kB/s-2095kB/s), io=4888KiB (5005kB), run=1001-1027msec
00:18:43.608 WRITE: bw=9020KiB/s (9237kB/s), 1994KiB/s-3117KiB/s (2042kB/s-3192kB/s), io=9264KiB (9486kB), run=1001-1027msec
00:18:43.608 
00:18:43.608 Disk stats (read/write):
00:18:43.608 nvme0n1: ios=558/512, merge=0/0, ticks=962/309, in_queue=1271, util=92.59%
00:18:43.608 nvme0n2: ios=57/512, merge=0/0, ticks=409/506, in_queue=915, util=92.05%
00:18:43.608 nvme0n3: ios=251/512, merge=0/0, ticks=1163/509, in_queue=1672, util=96.83%
00:18:43.608 nvme0n4: ios=287/512, merge=0/0, ticks=499/472, in_queue=971, util=97.22%
00:18:43.608 08:57:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:18:43.608 [global]
00:18:43.608 thread=1
00:18:43.608 invalidate=1
00:18:43.608 rw=write
00:18:43.608 time_based=1
00:18:43.608 runtime=1
00:18:43.608 ioengine=libaio
00:18:43.608 direct=1
00:18:43.608 bs=4096
00:18:43.608 iodepth=128
00:18:43.608 norandommap=0
00:18:43.608 numjobs=1
00:18:43.608 
00:18:43.608 verify_dump=1
00:18:43.608 verify_backlog=512
00:18:43.608 verify_state_save=0
00:18:43.608 do_verify=1
00:18:43.608 verify=crc32c-intel
00:18:43.608 [job0]
00:18:43.608 filename=/dev/nvme0n1
00:18:43.608 [job1]
00:18:43.608 filename=/dev/nvme0n2
00:18:43.608 [job2]
00:18:43.608 filename=/dev/nvme0n3
00:18:43.608 [job3]
00:18:43.608 filename=/dev/nvme0n4
00:18:43.608 Could not set queue depth (nvme0n1)
00:18:43.608 Could not set queue depth (nvme0n2)
00:18:43.608 Could not set queue depth (nvme0n3)
00:18:43.608 Could not set queue depth (nvme0n4)
00:18:43.875 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:18:43.875 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:18:43.875 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:18:43.875 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:18:43.875 fio-3.35
00:18:43.875 Starting 4 threads
00:18:45.286 
00:18:45.286 job0: (groupid=0, jobs=1): err= 0: pid=2579725: Sun Jun 9 08:57:07 2024
00:18:45.286 read: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec)
00:18:45.286 slat (nsec): min=906, max=10454k, avg=75226.22, stdev=478633.86
00:18:45.286 clat (usec): min=4036, max=23004, avg=9756.86, stdev=2586.56
00:18:45.286 lat (usec): min=4041, max=23043, avg=9832.09, stdev=2610.89
00:18:45.286 clat percentiles (usec):
00:18:45.286 | 1.00th=[ 4555], 5.00th=[ 6063], 10.00th=[ 6980], 20.00th=[ 7570],
00:18:45.286 | 30.00th=[ 8291], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10159],
00:18:45.286 | 70.00th=[10552], 80.00th=[11076], 90.00th=[13173], 95.00th=[15139],
00:18:45.286 | 99.00th=[17433], 99.50th=[18482], 99.90th=[18744], 99.95th=[20841],
00:18:45.286 | 99.99th=[22938]
00:18:45.286 write: IOPS=6941, BW=27.1MiB/s (28.4MB/s)(27.3MiB/1006msec); 0 zone resets
00:18:45.286 slat (nsec): min=1590, max=12086k, avg=67707.79, stdev=406503.86
00:18:45.286 clat (usec): min=1246, max=20297, avg=8973.06, stdev=3066.84
00:18:45.286 lat (usec): min=1255, max=20299, avg=9040.77, stdev=3084.57
00:18:45.286 clat percentiles (usec):
00:18:45.286 | 1.00th=[ 2573], 5.00th=[ 3949], 10.00th=[ 4948], 20.00th=[ 6652],
00:18:45.286 | 30.00th=[ 7635], 40.00th=[ 8356], 50.00th=[ 9110], 60.00th=[ 9634],
00:18:45.286 | 70.00th=[10290], 80.00th=[10552], 90.00th=[12256], 95.00th=[15008],
00:18:45.286 | 99.00th=[18482], 99.50th=[19530], 99.90th=[19530], 99.95th=[20317],
00:18:45.286 | 99.99th=[20317]
00:18:45.286 bw ( KiB/s): min=26176, max=28614, per=28.57%, avg=27395.00, stdev=1723.93, samples=2
00:18:45.286 iops : min= 6544, max= 7153, avg=6848.50, stdev=430.63, samples=2
00:18:45.286 lat (msec) : 2=0.17%, 4=2.43%, 10=57.58%, 20=39.74%, 50=0.07%
00:18:45.286 cpu : usr=4.68%, sys=5.07%, ctx=757, majf=0, minf=1
00:18:45.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5%
00:18:45.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:45.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:45.286 issued rwts: total=6656,6983,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:45.286 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:45.286 job1: (groupid=0, jobs=1): err= 0: pid=2579729: Sun Jun 9 08:57:07 2024
00:18:45.286 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec)
00:18:45.286 slat (nsec): min=906, max=29660k, avg=81143.59, stdev=788700.25
00:18:45.286 clat (usec): min=2555, max=43113, avg=11951.59, stdev=7883.27
00:18:45.286 lat (usec): min=2560, max=43126, avg=12032.73, stdev=7917.03
00:18:45.286 clat percentiles (usec):
00:18:45.286 | 1.00th=[ 3392], 5.00th=[ 4817], 10.00th=[ 6587], 20.00th=[ 7570],
00:18:45.286 | 30.00th=[ 7898], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9765],
00:18:45.286 | 70.00th=[11994], 80.00th=[15270], 90.00th=[21365], 95.00th=[33817],
00:18:45.286 | 99.00th=[39584], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681],
00:18:45.286 | 99.99th=[43254]
00:18:45.286 write: IOPS=5467, BW=21.4MiB/s (22.4MB/s)(21.5MiB/1006msec); 0 zone resets
00:18:45.286 slat (nsec): min=1583, max=33436k, avg=84064.51, stdev=868064.28
00:18:45.286 clat (usec): min=909, max=58162, avg=11970.12, stdev=8616.90
00:18:45.286 lat (usec): min=918, max=58236, avg=12054.18, stdev=8659.47
00:18:45.286 clat percentiles (usec):
00:18:45.286 | 1.00th=[ 2802], 5.00th=[ 3884], 10.00th=[ 4948], 20.00th=[ 6194],
00:18:45.286 | 30.00th=[ 7242], 40.00th=[ 7767], 50.00th=[ 8455], 60.00th=[10028],
00:18:45.286 | 70.00th=[12518], 80.00th=[16712], 90.00th=[26084], 95.00th=[35390],
00:18:45.286 | 99.00th=[38011], 99.50th=[38011], 99.90th=[39060], 99.95th=[39060],
00:18:45.286 | 99.99th=[57934]
00:18:45.286 bw ( KiB/s): min=19376, max=23600, per=22.41%, avg=21488.00, stdev=2986.82, samples=2
00:18:45.286 iops : min= 4844, max= 5900, avg=5372.00, stdev=746.70, samples=2
00:18:45.286 lat (usec) : 1000=0.07%
00:18:45.286 lat (msec) : 2=0.27%, 4=4.37%, 10=56.48%, 20=25.46%, 50=13.34%
00:18:45.286 lat (msec) : 100=0.01%
00:18:45.286 cpu : usr=3.68%, sys=5.47%, ctx=427, majf=0, minf=1
00:18:45.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:18:45.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:45.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:45.286 issued rwts: total=5120,5500,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:45.286 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:45.286 job2: (groupid=0, jobs=1): err= 0: pid=2579736: Sun Jun 9 08:57:07 2024
00:18:45.286 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec)
00:18:45.286 slat (nsec): min=934, max=11822k, avg=81020.28, stdev=515636.68
00:18:45.286 clat (usec): min=5058, max=20917, avg=10443.02, stdev=2187.94
00:18:45.286 lat (usec): min=5064, max=20929, avg=10524.04, stdev=2223.99
00:18:45.286 clat percentiles (usec):
00:18:45.286 | 1.00th=[ 6325], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[ 8848],
00:18:45.286 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[10683],
00:18:45.286 | 70.00th=[11207], 80.00th=[11863], 90.00th=[12911], 95.00th=[14484],
00:18:45.286 | 99.00th=[18220], 99.50th=[19268], 99.90th=[20841], 99.95th=[20841],
00:18:45.286 | 99.99th=[20841]
00:18:45.286 write: IOPS=6488, BW=25.3MiB/s (26.6MB/s)(25.4MiB/1004msec); 0 zone resets
00:18:45.286 slat (nsec): min=1657, max=7268.0k, avg=73011.96, stdev=445752.46
00:18:45.286 clat (usec): min=663, max=19955, avg=9598.84, stdev=1946.88
00:18:45.286 lat (usec): min=3992, max=19982, avg=9671.86, stdev=1987.10
00:18:45.286 clat percentiles (usec):
00:18:45.286 | 1.00th=[ 4817], 5.00th=[ 7242], 10.00th=[ 7701], 20.00th=[ 8225],
00:18:45.286 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9634],
00:18:45.286 | 70.00th=[10028], 80.00th=[10945], 90.00th=[12649], 95.00th=[13566],
00:18:45.286 | 99.00th=[15401], 99.50th=[17171], 99.90th=[17957], 99.95th=[19792],
00:18:45.286 | 99.99th=[20055]
00:18:45.286 bw ( KiB/s): min=24248, max=26840, per=26.64%, avg=25544.00, stdev=1832.82, samples=2
00:18:45.286 iops : min= 6062, max= 6710, avg=6386.00, stdev=458.21, samples=2
00:18:45.286 lat (usec) : 750=0.01%
00:18:45.286 lat (msec) : 4=0.01%, 10=57.33%, 20=42.47%, 50=0.18%
00:18:45.286 cpu : usr=4.09%, sys=6.58%, ctx=544, majf=0, minf=1
00:18:45.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:18:45.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:45.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:45.286 issued rwts: total=6144,6514,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:45.286 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:45.286 job3: (groupid=0, jobs=1): err= 0: pid=2579737: Sun Jun 9 08:57:07 2024
00:18:45.286 read: IOPS=5087, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1005msec)
00:18:45.286 slat (nsec): min=946, max=25882k, avg=105331.61, stdev=887703.57
00:18:45.286 clat (usec): min=2991, max=60686, avg=13412.86, stdev=6750.11
00:18:45.286 lat (usec): min=6209, max=60693, avg=13518.19, stdev=6818.74
00:18:45.286 clat percentiles (usec):
00:18:45.286 | 1.00th=[ 6980], 5.00th=[ 7701], 10.00th=[ 8160], 20.00th=[ 8979],
00:18:45.286 | 30.00th=[ 9765], 40.00th=[10683], 50.00th=[11338], 60.00th=[12518],
00:18:45.286 | 70.00th=[13960], 80.00th=[16450], 90.00th=[20055], 95.00th=[25035],
00:18:45.286 | 99.00th=[36439], 99.50th=[39584], 99.90th=[60556], 99.95th=[60556],
00:18:45.286 | 99.99th=[60556]
00:18:45.286 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets
00:18:45.286 slat (nsec): min=1626, max=15735k, avg=75595.18, stdev=516608.23
00:18:45.286 clat (usec): min=981, max=60628, avg=11499.30, stdev=5808.13
00:18:45.286 lat (usec): min=990, max=60635, avg=11574.90, stdev=5820.22
00:18:45.286 clat percentiles (usec):
00:18:45.286 | 1.00th=[ 3621], 5.00th=[ 5407], 10.00th=[ 6456], 20.00th=[ 7439],
00:18:45.286 | 30.00th=[ 8356], 40.00th=[ 9372], 50.00th=[10421], 60.00th=[11338],
00:18:45.286 | 70.00th=[12780], 80.00th=[14615], 90.00th=[17957], 95.00th=[20055],
00:18:45.286 | 99.00th=[28705], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119],
00:18:45.286 | 99.99th=[60556]
00:18:45.286 bw ( KiB/s): min=17272, max=23688, per=21.36%, avg=20480.00, stdev=4536.80, samples=2
00:18:45.286 iops : min= 4318, max= 5922, avg=5120.00, stdev=1134.20, samples=2
00:18:45.286 lat (usec) : 1000=0.02%
00:18:45.286 lat (msec) : 2=0.15%, 4=0.47%, 10=38.81%, 20=52.50%, 50=7.47%
00:18:45.286 lat (msec) : 100=0.60%
00:18:45.286 cpu : usr=3.29%, sys=5.88%, ctx=464, majf=0, minf=1
00:18:45.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:18:45.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:45.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:45.287 issued rwts: total=5113,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:45.287 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:45.287 
00:18:45.287 Run status group 0 (all jobs):
00:18:45.287 READ: bw=89.4MiB/s (93.8MB/s), 19.9MiB/s-25.8MiB/s (20.8MB/s-27.1MB/s), io=90.0MiB (94.3MB), run=1004-1006msec
00:18:45.287 WRITE: bw=93.6MiB/s (98.2MB/s), 19.9MiB/s-27.1MiB/s (20.9MB/s-28.4MB/s), io=94.2MiB (98.8MB), run=1004-1006msec
00:18:45.287 
00:18:45.287 Disk stats (read/write):
00:18:45.287 nvme0n1: ios=5681/5804, merge=0/0, ticks=41045/36827, in_queue=77872, util=84.57% 00:18:45.287 nvme0n2: ios=4562/4608, merge=0/0, ticks=44145/47276, in_queue=91421, util=88.38% 00:18:45.287 nvme0n3: ios=5144/5122, merge=0/0, ticks=27007/23126, in_queue=50133, util=94.94% 00:18:45.287 nvme0n4: ios=4068/4096, merge=0/0, ticks=52019/40891, in_queue=92910, util=96.80% 00:18:45.287 08:57:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:45.287 [global] 00:18:45.287 thread=1 00:18:45.287 invalidate=1 00:18:45.287 rw=randwrite 00:18:45.287 time_based=1 00:18:45.287 runtime=1 00:18:45.287 ioengine=libaio 00:18:45.287 direct=1 00:18:45.287 bs=4096 00:18:45.287 iodepth=128 00:18:45.287 norandommap=0 00:18:45.287 numjobs=1 00:18:45.287 00:18:45.287 verify_dump=1 00:18:45.287 verify_backlog=512 00:18:45.287 verify_state_save=0 00:18:45.287 do_verify=1 00:18:45.287 verify=crc32c-intel 00:18:45.287 [job0] 00:18:45.287 filename=/dev/nvme0n1 00:18:45.287 [job1] 00:18:45.287 filename=/dev/nvme0n2 00:18:45.287 [job2] 00:18:45.287 filename=/dev/nvme0n3 00:18:45.287 [job3] 00:18:45.287 filename=/dev/nvme0n4 00:18:45.287 Could not set queue depth (nvme0n1) 00:18:45.287 Could not set queue depth (nvme0n2) 00:18:45.287 Could not set queue depth (nvme0n3) 00:18:45.287 Could not set queue depth (nvme0n4) 00:18:45.547 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:45.547 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:45.547 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:45.547 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:45.547 fio-3.35 00:18:45.547 Starting 4 threads 00:18:46.947 
00:18:46.947 job0: (groupid=0, jobs=1): err= 0: pid=2580235: Sun Jun 9 08:57:09 2024 00:18:46.947 read: IOPS=6482, BW=25.3MiB/s (26.6MB/s)(25.4MiB/1003msec) 00:18:46.947 slat (nsec): min=909, max=56770k, avg=74392.93, stdev=848454.09 00:18:46.947 clat (usec): min=1324, max=87551, avg=10076.86, stdev=10002.17 00:18:46.947 lat (usec): min=1566, max=87579, avg=10151.25, stdev=10062.80 00:18:46.947 clat percentiles (usec): 00:18:46.947 | 1.00th=[ 2245], 5.00th=[ 4424], 10.00th=[ 5538], 20.00th=[ 6456], 00:18:46.947 | 30.00th=[ 7177], 40.00th=[ 7701], 50.00th=[ 8160], 60.00th=[ 8586], 00:18:46.947 | 70.00th=[ 8979], 80.00th=[10028], 90.00th=[13042], 95.00th=[21890], 00:18:46.947 | 99.00th=[73925], 99.50th=[81265], 99.90th=[81265], 99.95th=[81265], 00:18:46.947 | 99.99th=[87557] 00:18:46.947 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:18:46.947 slat (nsec): min=1531, max=7273.8k, avg=65928.46, stdev=381727.20 00:18:46.947 clat (usec): min=769, max=26579, avg=9244.18, stdev=4668.66 00:18:46.947 lat (usec): min=1260, max=26587, avg=9310.11, stdev=4692.57 00:18:46.947 clat percentiles (usec): 00:18:46.947 | 1.00th=[ 2409], 5.00th=[ 3392], 10.00th=[ 4293], 20.00th=[ 5604], 00:18:46.947 | 30.00th=[ 6259], 40.00th=[ 7111], 50.00th=[ 8455], 60.00th=[ 9372], 00:18:46.947 | 70.00th=[10814], 80.00th=[11994], 90.00th=[15795], 95.00th=[19792], 00:18:46.947 | 99.00th=[23987], 99.50th=[24773], 99.90th=[25822], 99.95th=[25822], 00:18:46.947 | 99.99th=[26608] 00:18:46.947 bw ( KiB/s): min=20480, max=32768, per=35.50%, avg=26624.00, stdev=8688.93, samples=2 00:18:46.947 iops : min= 5120, max= 8192, avg=6656.00, stdev=2172.23, samples=2 00:18:46.947 lat (usec) : 1000=0.01% 00:18:46.947 lat (msec) : 2=0.68%, 4=4.56%, 10=67.61%, 20=22.02%, 50=4.16% 00:18:46.947 lat (msec) : 100=0.97% 00:18:46.947 cpu : usr=2.59%, sys=5.89%, ctx=744, majf=0, minf=1 00:18:46.947 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:18:46.947 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:46.947 issued rwts: total=6502,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:46.947 job1: (groupid=0, jobs=1): err= 0: pid=2580236: Sun Jun 9 08:57:09 2024 00:18:46.947 read: IOPS=5789, BW=22.6MiB/s (23.7MB/s)(23.7MiB/1050msec) 00:18:46.947 slat (nsec): min=893, max=14660k, avg=71172.86, stdev=533025.21 00:18:46.947 clat (usec): min=2509, max=64708, avg=11408.72, stdev=10075.33 00:18:46.947 lat (usec): min=2537, max=71910, avg=11479.89, stdev=10113.31 00:18:46.947 clat percentiles (usec): 00:18:46.947 | 1.00th=[ 3654], 5.00th=[ 4359], 10.00th=[ 5407], 20.00th=[ 6521], 00:18:46.947 | 30.00th=[ 7177], 40.00th=[ 7701], 50.00th=[ 8225], 60.00th=[ 8979], 00:18:46.947 | 70.00th=[ 9765], 80.00th=[12387], 90.00th=[19792], 95.00th=[35390], 00:18:46.947 | 99.00th=[57410], 99.50th=[64226], 99.90th=[64750], 99.95th=[64750], 00:18:46.947 | 99.99th=[64750] 00:18:46.947 write: IOPS=5851, BW=22.9MiB/s (24.0MB/s)(24.0MiB/1050msec); 0 zone resets 00:18:46.947 slat (nsec): min=1529, max=35049k, avg=84165.24, stdev=675462.37 00:18:46.947 clat (usec): min=872, max=45432, avg=10343.05, stdev=5369.58 00:18:46.947 lat (usec): min=876, max=45437, avg=10427.22, stdev=5407.73 00:18:46.947 clat percentiles (usec): 00:18:46.947 | 1.00th=[ 3490], 5.00th=[ 4359], 10.00th=[ 5276], 20.00th=[ 6652], 00:18:46.947 | 30.00th=[ 7963], 40.00th=[ 8455], 50.00th=[ 9372], 60.00th=[10290], 00:18:46.947 | 70.00th=[11207], 80.00th=[12518], 90.00th=[15795], 95.00th=[20579], 00:18:46.947 | 99.00th=[30802], 99.50th=[42206], 99.90th=[43254], 99.95th=[45351], 00:18:46.947 | 99.99th=[45351] 00:18:46.947 bw ( KiB/s): min=20896, max=28256, per=32.77%, avg=24576.00, stdev=5204.31, samples=2 00:18:46.947 iops : min= 5224, max= 7064, avg=6144.00, stdev=1301.08, samples=2 00:18:46.947 lat (usec) : 
1000=0.07% 00:18:46.947 lat (msec) : 2=0.21%, 4=1.96%, 10=62.26%, 20=27.96%, 50=6.29% 00:18:46.947 lat (msec) : 100=1.25% 00:18:46.947 cpu : usr=2.76%, sys=5.53%, ctx=551, majf=0, minf=1 00:18:46.947 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:46.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:46.947 issued rwts: total=6079,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:46.947 job2: (groupid=0, jobs=1): err= 0: pid=2580243: Sun Jun 9 08:57:09 2024 00:18:46.947 read: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1017msec) 00:18:46.947 slat (nsec): min=937, max=16704k, avg=124361.34, stdev=880569.84 00:18:46.947 clat (usec): min=6038, max=39793, avg=14583.33, stdev=6083.22 00:18:46.947 lat (usec): min=6045, max=39800, avg=14707.69, stdev=6142.29 00:18:46.947 clat percentiles (usec): 00:18:46.947 | 1.00th=[ 6783], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10159], 00:18:46.947 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11994], 60.00th=[12256], 00:18:46.947 | 70.00th=[16057], 80.00th=[19792], 90.00th=[25560], 95.00th=[28181], 00:18:46.947 | 99.00th=[31327], 99.50th=[31327], 99.90th=[33424], 99.95th=[39584], 00:18:46.947 | 99.99th=[39584] 00:18:46.947 write: IOPS=3246, BW=12.7MiB/s (13.3MB/s)(12.9MiB/1017msec); 0 zone resets 00:18:46.947 slat (nsec): min=1568, max=11915k, avg=182419.93, stdev=871333.09 00:18:46.947 clat (usec): min=1125, max=74311, avg=25501.05, stdev=16345.63 00:18:46.947 lat (usec): min=1135, max=74315, avg=25683.47, stdev=16445.16 00:18:46.947 clat percentiles (usec): 00:18:46.947 | 1.00th=[ 3720], 5.00th=[ 6521], 10.00th=[ 8160], 20.00th=[12649], 00:18:46.947 | 30.00th=[15926], 40.00th=[18482], 50.00th=[20841], 60.00th=[24511], 00:18:46.947 | 70.00th=[27395], 80.00th=[38011], 90.00th=[51643], 95.00th=[60556], 00:18:46.947 | 
99.00th=[71828], 99.50th=[72877], 99.90th=[73925], 99.95th=[73925], 00:18:46.947 | 99.99th=[73925] 00:18:46.947 bw ( KiB/s): min=12160, max=13232, per=16.93%, avg=12696.00, stdev=758.02, samples=2 00:18:46.947 iops : min= 3040, max= 3308, avg=3174.00, stdev=189.50, samples=2 00:18:46.947 lat (msec) : 2=0.13%, 4=0.50%, 10=13.65%, 20=50.36%, 50=29.53% 00:18:46.947 lat (msec) : 100=5.84% 00:18:46.947 cpu : usr=2.17%, sys=3.74%, ctx=351, majf=0, minf=1 00:18:46.947 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:46.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:46.947 issued rwts: total=3072,3302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:46.947 job3: (groupid=0, jobs=1): err= 0: pid=2580247: Sun Jun 9 08:57:09 2024 00:18:46.947 read: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1017msec) 00:18:46.947 slat (nsec): min=1003, max=14782k, avg=134520.36, stdev=862932.32 00:18:46.947 clat (usec): min=7815, max=76435, avg=16002.07, stdev=8895.76 00:18:46.947 lat (usec): min=7817, max=76439, avg=16136.59, stdev=8968.06 00:18:46.947 clat percentiles (usec): 00:18:46.947 | 1.00th=[ 8717], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[11076], 00:18:46.947 | 30.00th=[11863], 40.00th=[12387], 50.00th=[13304], 60.00th=[15008], 00:18:46.947 | 70.00th=[15926], 80.00th=[18744], 90.00th=[22676], 95.00th=[26084], 00:18:46.947 | 99.00th=[71828], 99.50th=[74974], 99.90th=[76022], 99.95th=[76022], 00:18:46.947 | 99.99th=[76022] 00:18:46.947 write: IOPS=3522, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1017msec); 0 zone resets 00:18:46.948 slat (nsec): min=1631, max=19603k, avg=159220.48, stdev=873211.21 00:18:46.948 clat (usec): min=6244, max=76422, avg=22233.77, stdev=14407.61 00:18:46.948 lat (usec): min=6252, max=76425, avg=22392.99, stdev=14469.51 00:18:46.948 clat percentiles (usec): 
00:18:46.948 | 1.00th=[ 6456], 5.00th=[ 7308], 10.00th=[ 8094], 20.00th=[11076], 00:18:46.948 | 30.00th=[13435], 40.00th=[15795], 50.00th=[17695], 60.00th=[19792], 00:18:46.948 | 70.00th=[22938], 80.00th=[31851], 90.00th=[49021], 95.00th=[53216], 00:18:46.948 | 99.00th=[60556], 99.50th=[68682], 99.90th=[69731], 99.95th=[76022], 00:18:46.948 | 99.99th=[76022] 00:18:46.948 bw ( KiB/s): min=12504, max=15128, per=18.42%, avg=13816.00, stdev=1855.45, samples=2 00:18:46.948 iops : min= 3126, max= 3782, avg=3454.00, stdev=463.86, samples=2 00:18:46.948 lat (msec) : 10=13.23%, 20=57.45%, 50=23.43%, 100=5.89% 00:18:46.948 cpu : usr=1.57%, sys=4.33%, ctx=375, majf=0, minf=1 00:18:46.948 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:46.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:46.948 issued rwts: total=3072,3582,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.948 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:46.948 00:18:46.948 Run status group 0 (all jobs): 00:18:46.948 READ: bw=69.7MiB/s (73.0MB/s), 11.8MiB/s-25.3MiB/s (12.4MB/s-26.6MB/s), io=73.1MiB (76.7MB), run=1003-1050msec 00:18:46.948 WRITE: bw=73.2MiB/s (76.8MB/s), 12.7MiB/s-25.9MiB/s (13.3MB/s-27.2MB/s), io=76.9MiB (80.6MB), run=1003-1050msec 00:18:46.948 00:18:46.948 Disk stats (read/write): 00:18:46.948 nvme0n1: ios=5505/5632, merge=0/0, ticks=26633/26711, in_queue=53344, util=99.90% 00:18:46.948 nvme0n2: ios=5142/5121, merge=0/0, ticks=26632/24349, in_queue=50981, util=100.00% 00:18:46.948 nvme0n3: ios=2560/2719, merge=0/0, ticks=36744/65350, in_queue=102094, util=88.50% 00:18:46.948 nvme0n4: ios=2601/3039, merge=0/0, ticks=40892/62228, in_queue=103120, util=97.01% 00:18:46.948 08:57:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:46.948 08:57:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2580570 00:18:46.948 
08:57:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:46.948 08:57:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:46.948 [global] 00:18:46.948 thread=1 00:18:46.948 invalidate=1 00:18:46.948 rw=read 00:18:46.948 time_based=1 00:18:46.948 runtime=10 00:18:46.948 ioengine=libaio 00:18:46.948 direct=1 00:18:46.948 bs=4096 00:18:46.948 iodepth=1 00:18:46.948 norandommap=1 00:18:46.948 numjobs=1 00:18:46.948 00:18:46.948 [job0] 00:18:46.948 filename=/dev/nvme0n1 00:18:46.948 [job1] 00:18:46.948 filename=/dev/nvme0n2 00:18:46.948 [job2] 00:18:46.948 filename=/dev/nvme0n3 00:18:46.948 [job3] 00:18:46.948 filename=/dev/nvme0n4 00:18:46.948 Could not set queue depth (nvme0n1) 00:18:46.948 Could not set queue depth (nvme0n2) 00:18:46.948 Could not set queue depth (nvme0n3) 00:18:46.948 Could not set queue depth (nvme0n4) 00:18:47.208 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:47.208 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:47.208 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:47.208 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:47.208 fio-3.35 00:18:47.208 Starting 4 threads 00:18:49.754 08:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:50.015 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=2056192, buflen=4096 00:18:50.015 fio: pid=2580791, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:50.015 08:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 
00:18:50.015 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=11776000, buflen=4096 00:18:50.015 fio: pid=2580785, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:50.015 08:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:50.015 08:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:50.276 08:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:50.276 08:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:50.276 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=3215360, buflen=4096 00:18:50.276 fio: pid=2580758, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:50.276 08:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:50.277 08:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:50.538 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=3612672, buflen=4096 00:18:50.538 fio: pid=2580767, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:50.538 00:18:50.538 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2580758: Sun Jun 9 08:57:12 2024 00:18:50.538 read: IOPS=266, BW=1064KiB/s (1090kB/s)(3140KiB/2951msec) 00:18:50.538 slat (usec): min=6, max=16323, avg=45.71, stdev=581.35 00:18:50.538 clat (usec): min=618, max=43313, avg=3679.78, stdev=9459.40 00:18:50.538 lat (usec): min=645, max=58983, avg=3725.51, stdev=9562.49 00:18:50.538 clat percentiles 
(usec): 00:18:50.538 | 1.00th=[ 742], 5.00th=[ 889], 10.00th=[ 1004], 20.00th=[ 1287], 00:18:50.538 | 30.00th=[ 1369], 40.00th=[ 1385], 50.00th=[ 1418], 60.00th=[ 1434], 00:18:50.538 | 70.00th=[ 1467], 80.00th=[ 1483], 90.00th=[ 1549], 95.00th=[41157], 00:18:50.538 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:18:50.538 | 99.99th=[43254] 00:18:50.538 bw ( KiB/s): min= 448, max= 2320, per=17.21%, avg=1120.00, stdev=760.23, samples=5 00:18:50.538 iops : min= 112, max= 580, avg=280.00, stdev=190.06, samples=5 00:18:50.538 lat (usec) : 750=1.02%, 1000=8.78% 00:18:50.538 lat (msec) : 2=84.35%, 50=5.73% 00:18:50.538 cpu : usr=0.24%, sys=0.95%, ctx=787, majf=0, minf=1 00:18:50.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.538 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.538 issued rwts: total=786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.538 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2580767: Sun Jun 9 08:57:12 2024 00:18:50.538 read: IOPS=284, BW=1138KiB/s (1165kB/s)(3528KiB/3101msec) 00:18:50.538 slat (usec): min=7, max=6526, avg=31.78, stdev=218.86 00:18:50.538 clat (usec): min=990, max=43402, avg=3453.27, stdev=8806.69 00:18:50.538 lat (usec): min=1018, max=49006, avg=3484.95, stdev=8842.14 00:18:50.538 clat percentiles (usec): 00:18:50.538 | 1.00th=[ 1188], 5.00th=[ 1270], 10.00th=[ 1319], 20.00th=[ 1369], 00:18:50.538 | 30.00th=[ 1401], 40.00th=[ 1418], 50.00th=[ 1450], 60.00th=[ 1467], 00:18:50.538 | 70.00th=[ 1500], 80.00th=[ 1532], 90.00th=[ 1647], 95.00th=[ 7963], 00:18:50.538 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:18:50.538 | 99.99th=[43254] 00:18:50.538 bw ( KiB/s): min= 88, max= 2704, per=17.58%, avg=1144.67, 
stdev=1249.80, samples=6 00:18:50.538 iops : min= 22, max= 676, avg=286.17, stdev=312.45, samples=6 00:18:50.538 lat (usec) : 1000=0.11% 00:18:50.538 lat (msec) : 2=94.56%, 4=0.11%, 10=0.11%, 20=0.11%, 50=4.87% 00:18:50.538 cpu : usr=0.23%, sys=0.90%, ctx=886, majf=0, minf=1 00:18:50.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.538 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.538 issued rwts: total=883,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.538 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2580785: Sun Jun 9 08:57:12 2024 00:18:50.538 read: IOPS=1045, BW=4182KiB/s (4282kB/s)(11.2MiB/2750msec) 00:18:50.538 slat (nsec): min=5732, max=63048, avg=25049.31, stdev=6300.41 00:18:50.538 clat (usec): min=318, max=43245, avg=918.07, stdev=873.85 00:18:50.538 lat (usec): min=345, max=43269, avg=943.12, stdev=874.04 00:18:50.538 clat percentiles (usec): 00:18:50.538 | 1.00th=[ 529], 5.00th=[ 611], 10.00th=[ 660], 20.00th=[ 750], 00:18:50.538 | 30.00th=[ 832], 40.00th=[ 881], 50.00th=[ 922], 60.00th=[ 947], 00:18:50.538 | 70.00th=[ 971], 80.00th=[ 988], 90.00th=[ 1029], 95.00th=[ 1172], 00:18:50.538 | 99.00th=[ 1532], 99.50th=[ 1614], 99.90th=[ 3032], 99.95th=[18220], 00:18:50.538 | 99.99th=[43254] 00:18:50.538 bw ( KiB/s): min= 3896, max= 4384, per=64.74%, avg=4212.80, stdev=193.71, samples=5 00:18:50.538 iops : min= 974, max= 1096, avg=1053.20, stdev=48.43, samples=5 00:18:50.538 lat (usec) : 500=0.42%, 750=19.85%, 1000=63.32% 00:18:50.538 lat (msec) : 2=16.24%, 4=0.07%, 20=0.03%, 50=0.03% 00:18:50.538 cpu : usr=1.13%, sys=4.55%, ctx=2876, majf=0, minf=1 00:18:50.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.538 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.538 issued rwts: total=2876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.538 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2580791: Sun Jun 9 08:57:12 2024 00:18:50.539 read: IOPS=194, BW=777KiB/s (796kB/s)(2008KiB/2584msec) 00:18:50.539 slat (nsec): min=6664, max=56307, avg=24349.69, stdev=3600.81 00:18:50.539 clat (usec): min=1110, max=43585, avg=5073.11, stdev=11556.54 00:18:50.539 lat (usec): min=1131, max=43611, avg=5097.46, stdev=11557.04 00:18:50.539 clat percentiles (usec): 00:18:50.539 | 1.00th=[ 1172], 5.00th=[ 1287], 10.00th=[ 1385], 20.00th=[ 1418], 00:18:50.539 | 30.00th=[ 1450], 40.00th=[ 1467], 50.00th=[ 1500], 60.00th=[ 1516], 00:18:50.539 | 70.00th=[ 1549], 80.00th=[ 1614], 90.00th=[ 1876], 95.00th=[42206], 00:18:50.539 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:18:50.539 | 99.99th=[43779] 00:18:50.539 bw ( KiB/s): min= 104, max= 1808, per=11.44%, avg=744.00, stdev=687.70, samples=5 00:18:50.539 iops : min= 26, max= 452, avg=186.00, stdev=171.92, samples=5 00:18:50.539 lat (msec) : 2=90.66%, 4=0.40%, 50=8.75% 00:18:50.539 cpu : usr=0.31%, sys=0.50%, ctx=503, majf=0, minf=2 00:18:50.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.539 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.539 issued rwts: total=503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.539 00:18:50.539 Run status group 0 (all jobs): 00:18:50.539 READ: bw=6506KiB/s (6662kB/s), 777KiB/s-4182KiB/s (796kB/s-4282kB/s), io=19.7MiB (20.7MB), run=2584-3101msec 00:18:50.539 
00:18:50.539 Disk stats (read/write): 00:18:50.539 nvme0n1: ios=783/0, merge=0/0, ticks=2761/0, in_queue=2761, util=94.43% 00:18:50.539 nvme0n2: ios=882/0, merge=0/0, ticks=3013/0, in_queue=3013, util=95.29% 00:18:50.539 nvme0n3: ios=2719/0, merge=0/0, ticks=2278/0, in_queue=2278, util=96.03% 00:18:50.539 nvme0n4: ios=395/0, merge=0/0, ticks=2291/0, in_queue=2291, util=96.06% 00:18:50.539 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:50.539 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:50.799 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:50.799 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:50.799 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:50.799 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:51.060 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:51.060 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:51.321 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:51.321 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2580570 00:18:51.321 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:51.321 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 
00:18:51.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:51.321 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:51.321 08:57:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:18:51.321 08:57:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:18:51.321 08:57:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:51.321 08:57:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:18:51.321 08:57:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:51.321 08:57:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:18:51.321 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:51.321 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:51.321 nvmf hotplug test: fio failed as expected 00:18:51.321 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:51.582 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:51.582 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:51.582 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:51.582 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:51.582 08:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:51.582 08:57:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:51.582 08:57:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:51.582 08:57:13 nvmf_tcp.nvmf_fio_target 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:51.582 08:57:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:51.582 08:57:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:51.582 08:57:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:51.582 rmmod nvme_tcp 00:18:51.582 rmmod nvme_fabrics 00:18:51.582 rmmod nvme_keyring 00:18:51.582 08:57:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:51.582 08:57:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:51.582 08:57:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:51.582 08:57:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2576956 ']' 00:18:51.582 08:57:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2576956 00:18:51.582 08:57:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 2576956 ']' 00:18:51.582 08:57:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 2576956 00:18:51.582 08:57:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:18:51.582 08:57:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:51.582 08:57:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2576956 00:18:51.582 08:57:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:51.582 08:57:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:51.582 08:57:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2576956' 00:18:51.582 killing process with pid 2576956 00:18:51.582 08:57:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 2576956 00:18:51.582 08:57:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 2576956 00:18:51.844 08:57:14 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:51.844 08:57:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:51.844 08:57:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:51.844 08:57:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:51.844 08:57:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:51.844 08:57:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.844 08:57:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.844 08:57:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.815 08:57:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:53.815 00:18:53.815 real 0m28.360s 00:18:53.815 user 2m30.407s 00:18:53.815 sys 0m9.186s 00:18:53.815 08:57:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:53.815 08:57:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.815 ************************************ 00:18:53.815 END TEST nvmf_fio_target 00:18:53.815 ************************************ 00:18:53.815 08:57:16 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:53.815 08:57:16 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:53.815 08:57:16 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:53.815 08:57:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:53.815 ************************************ 00:18:53.815 START TEST nvmf_bdevio 00:18:53.815 ************************************ 00:18:53.815 08:57:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:54.077 * Looking for test storage... 00:18:54.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:54.077 08:57:16 
nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:54.077 08:57:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:00.669 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:00.669 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:00.669 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:00.669 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:00.669 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:00.669 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:19:00.669 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:00.669 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:00.669 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:00.669 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:00.669 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:00.669 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:00.669 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:00.669 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:00.670 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:00.670 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:00.670 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:00.670 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:00.670 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:00.930 08:57:23 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:00.930 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:00.930 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:00.930 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:00.930 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:00.930 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:01.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:01.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:19:01.191 00:19:01.191 --- 10.0.0.2 ping statistics --- 00:19:01.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.191 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:01.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:01.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.416 ms 00:19:01.191 00:19:01.191 --- 10.0.0.1 ping statistics --- 00:19:01.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.191 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2586241 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2586241 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 2586241 ']' 00:19:01.191 08:57:23 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:01.191 08:57:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:01.191 [2024-06-09 08:57:23.625408] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:19:01.191 [2024-06-09 08:57:23.625471] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.191 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.191 [2024-06-09 08:57:23.711125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:01.452 [2024-06-09 08:57:23.806365] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.452 [2024-06-09 08:57:23.806432] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.452 [2024-06-09 08:57:23.806441] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.452 [2024-06-09 08:57:23.806448] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.452 [2024-06-09 08:57:23.806460] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:01.452 [2024-06-09 08:57:23.806628] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:19:01.452 [2024-06-09 08:57:23.806945] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:19:01.452 [2024-06-09 08:57:23.807106] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:19:01.452 [2024-06-09 08:57:23.807109] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:02.024 [2024-06-09 08:57:24.452388] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:02.024 Malloc0 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:02.024 [2024-06-09 08:57:24.517804] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:19:02.024 { 00:19:02.024 "params": { 00:19:02.024 "name": "Nvme$subsystem", 00:19:02.024 "trtype": "$TEST_TRANSPORT", 00:19:02.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:02.024 "adrfam": "ipv4", 00:19:02.024 "trsvcid": "$NVMF_PORT", 00:19:02.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:02.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:02.024 "hdgst": ${hdgst:-false}, 00:19:02.024 "ddgst": ${ddgst:-false} 00:19:02.024 }, 00:19:02.024 "method": "bdev_nvme_attach_controller" 00:19:02.024 } 00:19:02.024 EOF 00:19:02.024 )") 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:02.024 08:57:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:02.024 "params": { 00:19:02.024 "name": "Nvme1", 00:19:02.024 "trtype": "tcp", 00:19:02.024 "traddr": "10.0.0.2", 00:19:02.024 "adrfam": "ipv4", 00:19:02.024 "trsvcid": "4420", 00:19:02.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.024 "hdgst": false, 00:19:02.024 "ddgst": false 00:19:02.024 }, 00:19:02.024 "method": "bdev_nvme_attach_controller" 00:19:02.024 }' 00:19:02.024 [2024-06-09 08:57:24.572301] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:19:02.025 [2024-06-09 08:57:24.572370] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2586592 ] 00:19:02.286 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.286 [2024-06-09 08:57:24.638057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:02.286 [2024-06-09 08:57:24.713461] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.286 [2024-06-09 08:57:24.713745] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.286 [2024-06-09 08:57:24.713749] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.547 I/O targets: 00:19:02.547 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:02.547 00:19:02.547 00:19:02.547 CUnit - A unit testing framework for C - Version 2.1-3 00:19:02.547 http://cunit.sourceforge.net/ 00:19:02.547 00:19:02.547 00:19:02.547 Suite: bdevio tests on: Nvme1n1 00:19:02.547 Test: blockdev write read block ...passed 00:19:02.808 Test: blockdev write zeroes read block ...passed 00:19:02.808 Test: blockdev write zeroes read no split ...passed 00:19:02.808 Test: blockdev write zeroes read split ...passed 00:19:02.808 Test: blockdev write zeroes read split partial ...passed 00:19:02.808 Test: blockdev reset ...[2024-06-09 08:57:25.218551] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:02.808 [2024-06-09 08:57:25.218621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ee400 (9): Bad file descriptor 00:19:02.808 [2024-06-09 08:57:25.328657] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:02.808 passed 00:19:03.069 Test: blockdev write read 8 blocks ...passed 00:19:03.069 Test: blockdev write read size > 128k ...passed 00:19:03.069 Test: blockdev write read invalid size ...passed 00:19:03.069 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:03.069 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:03.069 Test: blockdev write read max offset ...passed 00:19:03.069 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:03.069 Test: blockdev writev readv 8 blocks ...passed 00:19:03.069 Test: blockdev writev readv 30 x 1block ...passed 00:19:03.069 Test: blockdev writev readv block ...passed 00:19:03.069 Test: blockdev writev readv size > 128k ...passed 00:19:03.330 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:03.330 Test: blockdev comparev and writev ...[2024-06-09 08:57:25.645941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.330 [2024-06-09 08:57:25.645967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.330 [2024-06-09 08:57:25.645978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.330 [2024-06-09 08:57:25.645984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:03.330 [2024-06-09 08:57:25.646664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.330 [2024-06-09 08:57:25.646672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:03.330 [2024-06-09 08:57:25.646682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.330 [2024-06-09 08:57:25.646687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:03.330 [2024-06-09 08:57:25.647368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.330 [2024-06-09 08:57:25.647376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:03.330 [2024-06-09 08:57:25.647386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.330 [2024-06-09 08:57:25.647391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:03.330 [2024-06-09 08:57:25.648096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.330 [2024-06-09 08:57:25.648106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:03.330 [2024-06-09 08:57:25.648115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.330 [2024-06-09 08:57:25.648121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:03.330 passed 00:19:03.330 Test: blockdev nvme passthru rw ...passed 00:19:03.330 Test: blockdev nvme passthru vendor specific ...[2024-06-09 08:57:25.732440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.330 [2024-06-09 08:57:25.732480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:03.330 [2024-06-09 08:57:25.733034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.330 [2024-06-09 08:57:25.733041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:03.330 [2024-06-09 08:57:25.733622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.330 [2024-06-09 08:57:25.733630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:03.330 [2024-06-09 08:57:25.734227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.330 [2024-06-09 08:57:25.734234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:03.330 passed 00:19:03.330 Test: blockdev nvme admin passthru ...passed 00:19:03.330 Test: blockdev copy ...passed 00:19:03.330 00:19:03.330 Run Summary: Type Total Ran Passed Failed Inactive 00:19:03.330 suites 1 1 n/a 0 0 00:19:03.330 tests 23 23 23 0 0 00:19:03.330 asserts 152 152 152 0 n/a 00:19:03.330 00:19:03.330 Elapsed time = 1.553 seconds 00:19:03.591 08:57:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:03.591 08:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.591 08:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:03.591 08:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.591 08:57:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:03.591 08:57:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:19:03.591 08:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:03.591 08:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:03.591 08:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:03.591 08:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:03.591 08:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:03.591 08:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:03.591 rmmod nvme_tcp 00:19:03.592 rmmod nvme_fabrics 00:19:03.592 rmmod nvme_keyring 00:19:03.592 08:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:03.592 08:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:03.592 08:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:03.592 08:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2586241 ']' 00:19:03.592 08:57:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2586241 00:19:03.592 08:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 2586241 ']' 00:19:03.592 08:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 2586241 00:19:03.592 08:57:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:19:03.592 08:57:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:03.592 08:57:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2586241 00:19:03.592 08:57:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:19:03.592 08:57:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:19:03.592 08:57:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2586241' 00:19:03.592 killing process with pid 2586241 00:19:03.592 08:57:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 
2586241 00:19:03.592 08:57:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 2586241 00:19:03.853 08:57:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:03.853 08:57:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:03.853 08:57:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:03.853 08:57:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:03.853 08:57:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:03.853 08:57:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.853 08:57:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.853 08:57:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.767 08:57:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:05.767 00:19:05.767 real 0m11.912s 00:19:05.767 user 0m14.382s 00:19:05.767 sys 0m5.765s 00:19:05.767 08:57:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:05.767 08:57:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:05.767 ************************************ 00:19:05.767 END TEST nvmf_bdevio 00:19:05.767 ************************************ 00:19:05.767 08:57:28 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:05.767 08:57:28 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:05.767 08:57:28 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:05.767 08:57:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:06.029 ************************************ 00:19:06.029 START TEST nvmf_auth_target 00:19:06.029 ************************************ 00:19:06.029 08:57:28 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:06.029 * Looking for test storage... 00:19:06.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.029 08:57:28 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:06.030 
08:57:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:06.030 08:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:14.178 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:14.178 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.178 08:57:35 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:14.178 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:14.178 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:14.178 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:19:14.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:19:14.178 00:19:14.178 --- 10.0.0.2 ping statistics --- 00:19:14.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.178 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:14.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.406 ms 00:19:14.178 00:19:14.178 --- 10.0.0.1 ping statistics --- 00:19:14.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.178 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:19:14.178 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2590939 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2590939 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2590939 ']' 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:14.179 08:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2591148 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 
00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0c969e4e12b86c5b5dce18acd36deb49384afa240eee9220 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xaA 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0c969e4e12b86c5b5dce18acd36deb49384afa240eee9220 0 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0c969e4e12b86c5b5dce18acd36deb49384afa240eee9220 0 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0c969e4e12b86c5b5dce18acd36deb49384afa240eee9220 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 
00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xaA 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xaA 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.xaA 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b41a59b28dff3754c317aa35513c342ba0e09894e462091e9e68898bc66f7bcf 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.8EN 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b41a59b28dff3754c317aa35513c342ba0e09894e462091e9e68898bc66f7bcf 3 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b41a59b28dff3754c317aa35513c342ba0e09894e462091e9e68898bc66f7bcf 3 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # 
key=b41a59b28dff3754c317aa35513c342ba0e09894e462091e9e68898bc66f7bcf 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.8EN 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.8EN 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.8EN 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=393a2a7bbe3fe307c1c9d1e35bfabf2f 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.j6Z 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 393a2a7bbe3fe307c1c9d1e35bfabf2f 1 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 393a2a7bbe3fe307c1c9d1e35bfabf2f 1 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=393a2a7bbe3fe307c1c9d1e35bfabf2f 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.j6Z 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.j6Z 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.j6Z 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=be49ff2f2606eed499113aa87ecd755d96081b8d450204ca 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.kMJ 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key be49ff2f2606eed499113aa87ecd755d96081b8d450204ca 2 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 be49ff2f2606eed499113aa87ecd755d96081b8d450204ca 2 00:19:14.179 08:57:36 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=be49ff2f2606eed499113aa87ecd755d96081b8d450204ca 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.kMJ 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.kMJ 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.kMJ 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:14.179 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c50b58a45e3e73d3b134e7320576d495793648ce2a6d2512 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Qao 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c50b58a45e3e73d3b134e7320576d495793648ce2a6d2512 2 00:19:14.180 
08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c50b58a45e3e73d3b134e7320576d495793648ce2a6d2512 2 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c50b58a45e3e73d3b134e7320576d495793648ce2a6d2512 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Qao 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Qao 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Qao 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=199f71cf8b9dceb0de02ef393c9f060a 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:14.180 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ErU 00:19:14.180 08:57:36 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 199f71cf8b9dceb0de02ef393c9f060a 1 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 199f71cf8b9dceb0de02ef393c9f060a 1 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=199f71cf8b9dceb0de02ef393c9f060a 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ErU 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ErU 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.ErU 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ef4384be17441130d64280d83f9d693f67cdd15dd0d3b6a58a5c77fc568025fd 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:14.441 
08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.OgQ 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ef4384be17441130d64280d83f9d693f67cdd15dd0d3b6a58a5c77fc568025fd 3 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ef4384be17441130d64280d83f9d693f67cdd15dd0d3b6a58a5c77fc568025fd 3 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ef4384be17441130d64280d83f9d693f67cdd15dd0d3b6a58a5c77fc568025fd 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.OgQ 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.OgQ 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.OgQ 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2590939 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2590939 ']' 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:14.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:14.441 08:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.702 08:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:14.702 08:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:19:14.702 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2591148 /var/tmp/host.sock 00:19:14.702 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2591148 ']' 00:19:14.702 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:19:14.702 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:14.702 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:14.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:14.703 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:14.703 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.703 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:14.703 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:19:14.703 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:14.703 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.703 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.703 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.703 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:14.703 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xaA 00:19:14.703 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.703 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.703 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.703 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.xaA 00:19:14.703 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.xaA 00:19:14.964 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.8EN ]] 00:19:14.964 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8EN 00:19:14.964 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.964 08:57:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.964 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.964 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8EN 00:19:14.964 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8EN 00:19:15.226 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:15.226 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.j6Z 00:19:15.226 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.226 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.226 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.226 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.j6Z 00:19:15.226 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.j6Z 00:19:15.226 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.kMJ ]] 00:19:15.226 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kMJ 00:19:15.226 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.226 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.226 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.226 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kMJ 00:19:15.226 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kMJ 00:19:15.487 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:15.487 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Qao 00:19:15.487 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.487 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.487 08:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.487 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Qao 00:19:15.487 08:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Qao 00:19:15.749 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.ErU ]] 00:19:15.749 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ErU 00:19:15.749 08:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.749 08:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.749 08:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.749 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ErU 00:19:15.749 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.ErU 00:19:15.749 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:15.749 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.OgQ 00:19:15.749 08:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.749 08:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.749 08:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.749 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.OgQ 00:19:15.749 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.OgQ 00:19:16.010 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:16.010 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:16.010 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.010 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.010 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:16.010 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:16.272 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:16.272 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.272 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:16.272 08:57:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:16.272 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:16.272 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.272 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.272 08:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.272 08:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.272 08:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.272 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.272 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.272 00:19:16.533 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.533 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.533 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.533 08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.533 
08:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.533 08:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.533 08:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.533 08:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.533 08:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.533 { 00:19:16.533 "cntlid": 1, 00:19:16.533 "qid": 0, 00:19:16.533 "state": "enabled", 00:19:16.533 "listen_address": { 00:19:16.533 "trtype": "TCP", 00:19:16.533 "adrfam": "IPv4", 00:19:16.533 "traddr": "10.0.0.2", 00:19:16.533 "trsvcid": "4420" 00:19:16.533 }, 00:19:16.533 "peer_address": { 00:19:16.533 "trtype": "TCP", 00:19:16.533 "adrfam": "IPv4", 00:19:16.533 "traddr": "10.0.0.1", 00:19:16.533 "trsvcid": "41578" 00:19:16.533 }, 00:19:16.533 "auth": { 00:19:16.533 "state": "completed", 00:19:16.533 "digest": "sha256", 00:19:16.533 "dhgroup": "null" 00:19:16.533 } 00:19:16.533 } 00:19:16.533 ]' 00:19:16.533 08:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.533 08:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.533 08:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.533 08:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:16.533 08:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.794 08:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.794 08:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.794 08:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:16.794 08:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 
00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.787 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.049 00:19:18.049 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.049 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.049 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.310 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:19:18.310 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.310 08:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.310 08:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.310 08:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.310 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.310 { 00:19:18.310 "cntlid": 3, 00:19:18.310 "qid": 0, 00:19:18.310 "state": "enabled", 00:19:18.310 "listen_address": { 00:19:18.310 "trtype": "TCP", 00:19:18.310 "adrfam": "IPv4", 00:19:18.310 "traddr": "10.0.0.2", 00:19:18.310 "trsvcid": "4420" 00:19:18.310 }, 00:19:18.310 "peer_address": { 00:19:18.310 "trtype": "TCP", 00:19:18.310 "adrfam": "IPv4", 00:19:18.310 "traddr": "10.0.0.1", 00:19:18.310 "trsvcid": "41604" 00:19:18.310 }, 00:19:18.310 "auth": { 00:19:18.310 "state": "completed", 00:19:18.310 "digest": "sha256", 00:19:18.310 "dhgroup": "null" 00:19:18.310 } 00:19:18.310 } 00:19:18.310 ]' 00:19:18.310 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.310 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.310 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.310 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:18.310 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.310 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.310 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.310 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.571 08:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MzkzYTJhN2JiZTNmZTMwN2MxYzlkMWUzNWJmYWJmMmZPRL1p: --dhchap-ctrl-secret DHHC-1:02:YmU0OWZmMmYyNjA2ZWVkNDk5MTEzYWE4N2VjZDc1NWQ5NjA4MWI4ZDQ1MDIwNGNh643I1w==: 00:19:19.142 08:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.403 08:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.403 08:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.403 08:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.403 08:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.403 08:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.403 08:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:19.404 08:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:19.404 08:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:19.404 08:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.404 08:57:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:19.404 08:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:19.404 08:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:19.404 08:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.404 08:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.404 08:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.404 08:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.404 08:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.404 08:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.404 08:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.665 00:19:19.665 08:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.665 08:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.665 08:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.926 08:57:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.926 08:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.926 08:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.926 08:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.926 08:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.926 08:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.926 { 00:19:19.926 "cntlid": 5, 00:19:19.926 "qid": 0, 00:19:19.926 "state": "enabled", 00:19:19.926 "listen_address": { 00:19:19.926 "trtype": "TCP", 00:19:19.926 "adrfam": "IPv4", 00:19:19.926 "traddr": "10.0.0.2", 00:19:19.926 "trsvcid": "4420" 00:19:19.926 }, 00:19:19.926 "peer_address": { 00:19:19.926 "trtype": "TCP", 00:19:19.926 "adrfam": "IPv4", 00:19:19.926 "traddr": "10.0.0.1", 00:19:19.926 "trsvcid": "41636" 00:19:19.926 }, 00:19:19.926 "auth": { 00:19:19.926 "state": "completed", 00:19:19.926 "digest": "sha256", 00:19:19.926 "dhgroup": "null" 00:19:19.926 } 00:19:19.926 } 00:19:19.926 ]' 00:19:19.926 08:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.926 08:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.926 08:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.926 08:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:19.926 08:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.926 08:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.926 08:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.926 08:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.188 08:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YzUwYjU4YTQ1ZTNlNzNkM2IxMzRlNzMyMDU3NmQ0OTU3OTM2NDhjZTJhNmQyNTEygnshyQ==: --dhchap-ctrl-secret DHHC-1:01:MTk5ZjcxY2Y4YjlkY2ViMGRlMDJlZjM5M2M5ZjA2MGG+oj4m: 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.129 08:57:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.129 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.391 00:19:21.391 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.391 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.391 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.391 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:19:21.391 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.391 08:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.391 08:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.391 08:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.391 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.391 { 00:19:21.391 "cntlid": 7, 00:19:21.391 "qid": 0, 00:19:21.391 "state": "enabled", 00:19:21.391 "listen_address": { 00:19:21.391 "trtype": "TCP", 00:19:21.391 "adrfam": "IPv4", 00:19:21.391 "traddr": "10.0.0.2", 00:19:21.391 "trsvcid": "4420" 00:19:21.391 }, 00:19:21.391 "peer_address": { 00:19:21.391 "trtype": "TCP", 00:19:21.391 "adrfam": "IPv4", 00:19:21.391 "traddr": "10.0.0.1", 00:19:21.391 "trsvcid": "45652" 00:19:21.391 }, 00:19:21.391 "auth": { 00:19:21.391 "state": "completed", 00:19:21.391 "digest": "sha256", 00:19:21.391 "dhgroup": "null" 00:19:21.391 } 00:19:21.391 } 00:19:21.391 ]' 00:19:21.391 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.652 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.652 08:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.652 08:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:21.652 08:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.652 08:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.652 08:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.652 08:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.913 08:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:19:22.483 08:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.483 08:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:22.483 08:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.483 08:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.483 08:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.483 08:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.483 08:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.484 08:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:22.484 08:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:22.744 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:22.744 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.744 08:57:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha256 00:19:22.744 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:22.744 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:22.744 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.744 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.744 08:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.744 08:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.744 08:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.744 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.744 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.005 00:19:23.005 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.005 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.005 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.005 08:57:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.005 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.005 08:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.005 08:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.005 08:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.005 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.005 { 00:19:23.005 "cntlid": 9, 00:19:23.005 "qid": 0, 00:19:23.005 "state": "enabled", 00:19:23.005 "listen_address": { 00:19:23.005 "trtype": "TCP", 00:19:23.005 "adrfam": "IPv4", 00:19:23.005 "traddr": "10.0.0.2", 00:19:23.005 "trsvcid": "4420" 00:19:23.005 }, 00:19:23.005 "peer_address": { 00:19:23.005 "trtype": "TCP", 00:19:23.005 "adrfam": "IPv4", 00:19:23.005 "traddr": "10.0.0.1", 00:19:23.005 "trsvcid": "45682" 00:19:23.005 }, 00:19:23.005 "auth": { 00:19:23.005 "state": "completed", 00:19:23.005 "digest": "sha256", 00:19:23.005 "dhgroup": "ffdhe2048" 00:19:23.005 } 00:19:23.005 } 00:19:23.005 ]' 00:19:23.005 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.265 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.265 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.265 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:23.265 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.265 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.265 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.265 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.265 08:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.207 08:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.468 00:19:24.468 08:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.468 08:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.468 08:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:24.728 08:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.728 08:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.728 08:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.728 08:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.728 08:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.728 08:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.728 { 00:19:24.728 "cntlid": 11, 00:19:24.728 "qid": 0, 00:19:24.728 "state": "enabled", 00:19:24.729 "listen_address": { 00:19:24.729 "trtype": "TCP", 00:19:24.729 "adrfam": "IPv4", 00:19:24.729 "traddr": "10.0.0.2", 00:19:24.729 "trsvcid": "4420" 00:19:24.729 }, 00:19:24.729 "peer_address": { 00:19:24.729 "trtype": "TCP", 00:19:24.729 "adrfam": "IPv4", 00:19:24.729 "traddr": "10.0.0.1", 00:19:24.729 "trsvcid": "45724" 00:19:24.729 }, 00:19:24.729 "auth": { 00:19:24.729 "state": "completed", 00:19:24.729 "digest": "sha256", 00:19:24.729 "dhgroup": "ffdhe2048" 00:19:24.729 } 00:19:24.729 } 00:19:24.729 ]' 00:19:24.729 08:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.729 08:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.729 08:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.729 08:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:24.729 08:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.729 08:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.729 08:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:19:24.729 08:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.989 08:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MzkzYTJhN2JiZTNmZTMwN2MxYzlkMWUzNWJmYWJmMmZPRL1p: --dhchap-ctrl-secret DHHC-1:02:YmU0OWZmMmYyNjA2ZWVkNDk5MTEzYWE4N2VjZDc1NWQ5NjA4MWI4ZDQ1MDIwNGNh643I1w==: 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 
-- # local digest dhgroup key ckey qpairs 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.931 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.192 00:19:26.192 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.192 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.192 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.452 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.452 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.452 08:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.452 08:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.452 08:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.452 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.452 { 00:19:26.452 "cntlid": 13, 00:19:26.452 "qid": 0, 00:19:26.452 "state": "enabled", 00:19:26.452 "listen_address": { 00:19:26.452 "trtype": "TCP", 00:19:26.452 "adrfam": "IPv4", 00:19:26.452 "traddr": "10.0.0.2", 00:19:26.452 "trsvcid": "4420" 00:19:26.452 }, 00:19:26.452 "peer_address": { 00:19:26.452 "trtype": "TCP", 00:19:26.452 "adrfam": "IPv4", 00:19:26.452 "traddr": "10.0.0.1", 00:19:26.452 "trsvcid": "45756" 00:19:26.452 }, 00:19:26.452 "auth": { 00:19:26.452 "state": "completed", 00:19:26.452 "digest": "sha256", 00:19:26.452 "dhgroup": "ffdhe2048" 00:19:26.452 } 00:19:26.452 } 00:19:26.452 ]' 00:19:26.452 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.452 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.452 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.452 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:26.452 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.452 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.452 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:26.452 08:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.713 08:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YzUwYjU4YTQ1ZTNlNzNkM2IxMzRlNzMyMDU3NmQ0OTU3OTM2NDhjZTJhNmQyNTEygnshyQ==: --dhchap-ctrl-secret DHHC-1:01:MTk5ZjcxY2Y4YjlkY2ViMGRlMDJlZjM5M2M5ZjA2MGG+oj4m: 00:19:27.284 08:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.284 08:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.284 08:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.284 08:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.544 08:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.544 08:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.544 08:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:27.544 08:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:27.544 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:27.544 08:57:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.544 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:27.544 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:27.544 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:27.544 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.544 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:27.544 08:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.544 08:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.544 08:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.544 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.545 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.805 00:19:27.805 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.805 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.805 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:28.066 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.066 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.066 08:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.066 08:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.066 08:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.066 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.066 { 00:19:28.066 "cntlid": 15, 00:19:28.066 "qid": 0, 00:19:28.066 "state": "enabled", 00:19:28.066 "listen_address": { 00:19:28.066 "trtype": "TCP", 00:19:28.066 "adrfam": "IPv4", 00:19:28.066 "traddr": "10.0.0.2", 00:19:28.066 "trsvcid": "4420" 00:19:28.066 }, 00:19:28.066 "peer_address": { 00:19:28.066 "trtype": "TCP", 00:19:28.066 "adrfam": "IPv4", 00:19:28.066 "traddr": "10.0.0.1", 00:19:28.066 "trsvcid": "45780" 00:19:28.066 }, 00:19:28.066 "auth": { 00:19:28.066 "state": "completed", 00:19:28.066 "digest": "sha256", 00:19:28.066 "dhgroup": "ffdhe2048" 00:19:28.066 } 00:19:28.066 } 00:19:28.066 ]' 00:19:28.066 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.066 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.066 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.066 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:28.066 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.066 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.066 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:19:28.066 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.327 08:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:19:28.898 08:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.159 08:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.159 08:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.159 08:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.159 08:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.159 08:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.159 08:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.159 08:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:29.159 08:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:29.159 08:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:29.159 08:57:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.159 08:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:29.159 08:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:29.159 08:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:29.159 08:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.159 08:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.159 08:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.159 08:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.159 08:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.160 08:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.160 08:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.420 00:19:29.420 08:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.420 08:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.420 08:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.680 08:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.680 08:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.680 08:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.680 08:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.680 08:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.680 08:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.680 { 00:19:29.680 "cntlid": 17, 00:19:29.680 "qid": 0, 00:19:29.680 "state": "enabled", 00:19:29.680 "listen_address": { 00:19:29.680 "trtype": "TCP", 00:19:29.680 "adrfam": "IPv4", 00:19:29.680 "traddr": "10.0.0.2", 00:19:29.680 "trsvcid": "4420" 00:19:29.680 }, 00:19:29.680 "peer_address": { 00:19:29.680 "trtype": "TCP", 00:19:29.680 "adrfam": "IPv4", 00:19:29.680 "traddr": "10.0.0.1", 00:19:29.680 "trsvcid": "45806" 00:19:29.680 }, 00:19:29.680 "auth": { 00:19:29.680 "state": "completed", 00:19:29.680 "digest": "sha256", 00:19:29.680 "dhgroup": "ffdhe3072" 00:19:29.680 } 00:19:29.680 } 00:19:29.680 ]' 00:19:29.680 08:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.680 08:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.680 08:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.680 08:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:29.680 08:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.680 08:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.680 08:57:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.680 08:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.940 08:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:30.882 08:57:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.882 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.143 00:19:31.143 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.143 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.143 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.405 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.405 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.405 08:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.405 08:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.405 08:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.405 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.405 { 00:19:31.405 "cntlid": 19, 00:19:31.405 "qid": 0, 00:19:31.405 "state": "enabled", 00:19:31.405 "listen_address": { 00:19:31.405 "trtype": "TCP", 00:19:31.405 "adrfam": "IPv4", 00:19:31.405 "traddr": "10.0.0.2", 00:19:31.405 "trsvcid": "4420" 00:19:31.405 }, 00:19:31.405 "peer_address": { 00:19:31.405 "trtype": "TCP", 00:19:31.405 "adrfam": "IPv4", 00:19:31.405 "traddr": "10.0.0.1", 00:19:31.405 "trsvcid": "51010" 00:19:31.405 }, 00:19:31.405 "auth": { 00:19:31.405 "state": "completed", 00:19:31.405 "digest": "sha256", 00:19:31.405 "dhgroup": "ffdhe3072" 00:19:31.405 } 00:19:31.405 } 00:19:31.405 ]' 00:19:31.405 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.405 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.405 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.405 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:31.405 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.405 08:57:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.405 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.405 08:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.666 08:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MzkzYTJhN2JiZTNmZTMwN2MxYzlkMWUzNWJmYWJmMmZPRL1p: --dhchap-ctrl-secret DHHC-1:02:YmU0OWZmMmYyNjA2ZWVkNDk5MTEzYWE4N2VjZDc1NWQ5NjA4MWI4ZDQ1MDIwNGNh643I1w==: 00:19:32.264 08:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.264 08:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:32.264 08:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:32.264 08:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.264 08:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:32.264 08:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.264 08:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:32.264 08:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:19:32.525 08:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:32.525 08:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.525 08:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:32.525 08:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:32.525 08:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:32.525 08:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.525 08:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.525 08:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:32.525 08:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.525 08:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:32.525 08:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.525 08:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.787 00:19:32.787 08:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.787 08:57:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.787 08:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.787 08:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.049 08:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.049 08:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.049 08:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.049 08:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.049 08:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.049 { 00:19:33.049 "cntlid": 21, 00:19:33.049 "qid": 0, 00:19:33.049 "state": "enabled", 00:19:33.049 "listen_address": { 00:19:33.049 "trtype": "TCP", 00:19:33.049 "adrfam": "IPv4", 00:19:33.049 "traddr": "10.0.0.2", 00:19:33.049 "trsvcid": "4420" 00:19:33.049 }, 00:19:33.049 "peer_address": { 00:19:33.049 "trtype": "TCP", 00:19:33.049 "adrfam": "IPv4", 00:19:33.049 "traddr": "10.0.0.1", 00:19:33.049 "trsvcid": "51048" 00:19:33.049 }, 00:19:33.049 "auth": { 00:19:33.049 "state": "completed", 00:19:33.049 "digest": "sha256", 00:19:33.049 "dhgroup": "ffdhe3072" 00:19:33.049 } 00:19:33.049 } 00:19:33.049 ]' 00:19:33.049 08:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.049 08:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.049 08:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.049 08:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:33.049 08:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.049 08:57:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.049 08:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.049 08:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.311 08:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YzUwYjU4YTQ1ZTNlNzNkM2IxMzRlNzMyMDU3NmQ0OTU3OTM2NDhjZTJhNmQyNTEygnshyQ==: --dhchap-ctrl-secret DHHC-1:01:MTk5ZjcxY2Y4YjlkY2ViMGRlMDJlZjM5M2M5ZjA2MGG+oj4m: 00:19:33.883 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.883 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:33.883 08:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.883 08:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.883 08:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.883 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.883 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:33.883 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe3072 00:19:34.144 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:34.144 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.144 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:34.144 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:34.144 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:34.145 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.145 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:34.145 08:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.145 08:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.145 08:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.145 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.145 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.405 00:19:34.405 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.405 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:19:34.405 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.667 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.667 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.667 08:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.667 08:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.667 08:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.667 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.667 { 00:19:34.667 "cntlid": 23, 00:19:34.667 "qid": 0, 00:19:34.667 "state": "enabled", 00:19:34.667 "listen_address": { 00:19:34.667 "trtype": "TCP", 00:19:34.667 "adrfam": "IPv4", 00:19:34.667 "traddr": "10.0.0.2", 00:19:34.667 "trsvcid": "4420" 00:19:34.667 }, 00:19:34.667 "peer_address": { 00:19:34.667 "trtype": "TCP", 00:19:34.667 "adrfam": "IPv4", 00:19:34.667 "traddr": "10.0.0.1", 00:19:34.667 "trsvcid": "51070" 00:19:34.667 }, 00:19:34.667 "auth": { 00:19:34.667 "state": "completed", 00:19:34.667 "digest": "sha256", 00:19:34.667 "dhgroup": "ffdhe3072" 00:19:34.667 } 00:19:34.667 } 00:19:34.667 ]' 00:19:34.667 08:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.667 08:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.667 08:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.667 08:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:34.667 08:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.667 08:57:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.667 08:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.667 08:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.928 08:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:19:35.500 08:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.500 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:35.500 08:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.500 08:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.500 08:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.500 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.500 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.500 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:35.500 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:35.761 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:35.761 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.761 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.761 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:35.761 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:35.761 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.761 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.761 08:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.761 08:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.761 08:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.761 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.761 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.021 00:19:36.021 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:19:36.021 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.021 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.282 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.282 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.282 08:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:36.282 08:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.282 08:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:36.282 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.282 { 00:19:36.282 "cntlid": 25, 00:19:36.282 "qid": 0, 00:19:36.282 "state": "enabled", 00:19:36.282 "listen_address": { 00:19:36.282 "trtype": "TCP", 00:19:36.282 "adrfam": "IPv4", 00:19:36.282 "traddr": "10.0.0.2", 00:19:36.282 "trsvcid": "4420" 00:19:36.282 }, 00:19:36.282 "peer_address": { 00:19:36.282 "trtype": "TCP", 00:19:36.282 "adrfam": "IPv4", 00:19:36.282 "traddr": "10.0.0.1", 00:19:36.282 "trsvcid": "51098" 00:19:36.282 }, 00:19:36.282 "auth": { 00:19:36.282 "state": "completed", 00:19:36.282 "digest": "sha256", 00:19:36.282 "dhgroup": "ffdhe4096" 00:19:36.282 } 00:19:36.282 } 00:19:36.282 ]' 00:19:36.282 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.282 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.282 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.282 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:36.282 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:19:36.282 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.282 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.282 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.543 08:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.484 08:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.745 
00:19:37.745 08:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.745 08:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.745 08:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.006 08:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.006 08:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.006 08:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.006 08:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.006 08:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.006 08:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.006 { 00:19:38.006 "cntlid": 27, 00:19:38.006 "qid": 0, 00:19:38.006 "state": "enabled", 00:19:38.006 "listen_address": { 00:19:38.006 "trtype": "TCP", 00:19:38.006 "adrfam": "IPv4", 00:19:38.006 "traddr": "10.0.0.2", 00:19:38.006 "trsvcid": "4420" 00:19:38.006 }, 00:19:38.006 "peer_address": { 00:19:38.006 "trtype": "TCP", 00:19:38.006 "adrfam": "IPv4", 00:19:38.006 "traddr": "10.0.0.1", 00:19:38.006 "trsvcid": "51130" 00:19:38.006 }, 00:19:38.006 "auth": { 00:19:38.006 "state": "completed", 00:19:38.006 "digest": "sha256", 00:19:38.006 "dhgroup": "ffdhe4096" 00:19:38.006 } 00:19:38.006 } 00:19:38.006 ]' 00:19:38.006 08:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.006 08:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.006 08:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.006 08:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:38.006 08:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.006 08:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.006 08:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.006 08:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.267 08:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MzkzYTJhN2JiZTNmZTMwN2MxYzlkMWUzNWJmYWJmMmZPRL1p: --dhchap-ctrl-secret DHHC-1:02:YmU0OWZmMmYyNjA2ZWVkNDk5MTEzYWE4N2VjZDc1NWQ5NjA4MWI4ZDQ1MDIwNGNh643I1w==: 00:19:38.890 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.890 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.890 08:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.890 08:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.890 08:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.890 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.890 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:38.890 08:58:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.150 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:39.150 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.150 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.150 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:39.150 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:39.150 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.150 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.150 08:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.150 08:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.151 08:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.151 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.151 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:39.411 00:19:39.411 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.411 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.411 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.411 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.411 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.411 08:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.411 08:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.672 08:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.672 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.672 { 00:19:39.672 "cntlid": 29, 00:19:39.672 "qid": 0, 00:19:39.672 "state": "enabled", 00:19:39.672 "listen_address": { 00:19:39.672 "trtype": "TCP", 00:19:39.672 "adrfam": "IPv4", 00:19:39.672 "traddr": "10.0.0.2", 00:19:39.672 "trsvcid": "4420" 00:19:39.672 }, 00:19:39.672 "peer_address": { 00:19:39.672 "trtype": "TCP", 00:19:39.672 "adrfam": "IPv4", 00:19:39.672 "traddr": "10.0.0.1", 00:19:39.672 "trsvcid": "51156" 00:19:39.672 }, 00:19:39.672 "auth": { 00:19:39.672 "state": "completed", 00:19:39.672 "digest": "sha256", 00:19:39.672 "dhgroup": "ffdhe4096" 00:19:39.672 } 00:19:39.672 } 00:19:39.672 ]' 00:19:39.672 08:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.672 08:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.672 08:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.672 08:58:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:39.672 08:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.672 08:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.672 08:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.672 08:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.933 08:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YzUwYjU4YTQ1ZTNlNzNkM2IxMzRlNzMyMDU3NmQ0OTU3OTM2NDhjZTJhNmQyNTEygnshyQ==: --dhchap-ctrl-secret DHHC-1:01:MTk5ZjcxY2Y4YjlkY2ViMGRlMDJlZjM5M2M5ZjA2MGG+oj4m: 00:19:40.505 08:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.505 08:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.505 08:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.505 08:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.505 08:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.505 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.505 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 
00:19:40.505 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:40.765 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:40.765 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.765 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:40.765 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:40.765 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:40.765 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.765 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:40.765 08:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.765 08:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.765 08:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.765 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.765 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.026 
00:19:41.026 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.026 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.026 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.287 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.287 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.287 08:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.287 08:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.287 08:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.287 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.287 { 00:19:41.287 "cntlid": 31, 00:19:41.287 "qid": 0, 00:19:41.287 "state": "enabled", 00:19:41.287 "listen_address": { 00:19:41.287 "trtype": "TCP", 00:19:41.287 "adrfam": "IPv4", 00:19:41.287 "traddr": "10.0.0.2", 00:19:41.287 "trsvcid": "4420" 00:19:41.287 }, 00:19:41.287 "peer_address": { 00:19:41.287 "trtype": "TCP", 00:19:41.287 "adrfam": "IPv4", 00:19:41.287 "traddr": "10.0.0.1", 00:19:41.287 "trsvcid": "59850" 00:19:41.287 }, 00:19:41.287 "auth": { 00:19:41.287 "state": "completed", 00:19:41.287 "digest": "sha256", 00:19:41.287 "dhgroup": "ffdhe4096" 00:19:41.287 } 00:19:41.287 } 00:19:41.287 ]' 00:19:41.287 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.287 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.287 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.287 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:41.287 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.287 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.287 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.287 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.548 08:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:19:42.120 08:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.120 08:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:42.120 08:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.120 08:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.120 08:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.120 08:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.120 08:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.120 08:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:19:42.120 08:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:42.380 08:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:42.380 08:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.380 08:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:42.380 08:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:42.380 08:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:42.380 08:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.381 08:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.381 08:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.381 08:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.381 08:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.381 08:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.381 08:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.641 00:19:42.641 08:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.641 08:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.641 08:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.902 08:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.902 08:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.902 08:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.902 08:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.902 08:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.902 08:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.902 { 00:19:42.902 "cntlid": 33, 00:19:42.902 "qid": 0, 00:19:42.902 "state": "enabled", 00:19:42.902 "listen_address": { 00:19:42.902 "trtype": "TCP", 00:19:42.902 "adrfam": "IPv4", 00:19:42.902 "traddr": "10.0.0.2", 00:19:42.902 "trsvcid": "4420" 00:19:42.902 }, 00:19:42.902 "peer_address": { 00:19:42.902 "trtype": "TCP", 00:19:42.902 "adrfam": "IPv4", 00:19:42.902 "traddr": "10.0.0.1", 00:19:42.902 "trsvcid": "59876" 00:19:42.902 }, 00:19:42.902 "auth": { 00:19:42.902 "state": "completed", 00:19:42.902 "digest": "sha256", 00:19:42.902 "dhgroup": "ffdhe6144" 00:19:42.902 } 00:19:42.902 } 00:19:42.902 ]' 00:19:42.902 08:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.902 08:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.902 08:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:19:43.164 08:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:43.164 08:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.164 08:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.164 08:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.164 08:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.164 08:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- 
# hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.108 08:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.678 00:19:44.678 08:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.678 08:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.678 08:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.678 08:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.678 08:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.678 08:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.678 08:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.678 08:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.678 08:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.678 { 00:19:44.678 "cntlid": 35, 00:19:44.678 "qid": 0, 00:19:44.678 "state": "enabled", 00:19:44.678 "listen_address": { 00:19:44.678 "trtype": "TCP", 00:19:44.678 "adrfam": "IPv4", 00:19:44.678 "traddr": "10.0.0.2", 00:19:44.678 "trsvcid": "4420" 00:19:44.678 }, 00:19:44.678 "peer_address": { 00:19:44.678 "trtype": "TCP", 00:19:44.678 "adrfam": "IPv4", 00:19:44.678 "traddr": "10.0.0.1", 00:19:44.678 "trsvcid": "59898" 00:19:44.678 }, 00:19:44.678 "auth": { 00:19:44.678 "state": "completed", 00:19:44.678 "digest": "sha256", 00:19:44.678 "dhgroup": "ffdhe6144" 00:19:44.678 } 00:19:44.678 } 00:19:44.678 ]' 00:19:44.678 08:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.678 08:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.678 
08:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.938 08:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:44.938 08:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.938 08:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.938 08:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.938 08:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.938 08:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MzkzYTJhN2JiZTNmZTMwN2MxYzlkMWUzNWJmYWJmMmZPRL1p: --dhchap-ctrl-secret DHHC-1:02:YmU0OWZmMmYyNjA2ZWVkNDk5MTEzYWE4N2VjZDc1NWQ5NjA4MWI4ZDQ1MDIwNGNh643I1w==: 00:19:45.879 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.879 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.879 08:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:45.880 08:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.880 08:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:45.880 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.880 08:58:08 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:45.880 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:45.880 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:45.880 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.880 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:45.880 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:45.880 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:45.880 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.880 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.880 08:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:45.880 08:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.880 08:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:45.880 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.880 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.452 00:19:46.452 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.452 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.452 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.452 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.452 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.452 08:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.452 08:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.452 08:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.452 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.452 { 00:19:46.452 "cntlid": 37, 00:19:46.452 "qid": 0, 00:19:46.452 "state": "enabled", 00:19:46.452 "listen_address": { 00:19:46.452 "trtype": "TCP", 00:19:46.452 "adrfam": "IPv4", 00:19:46.452 "traddr": "10.0.0.2", 00:19:46.452 "trsvcid": "4420" 00:19:46.452 }, 00:19:46.452 "peer_address": { 00:19:46.452 "trtype": "TCP", 00:19:46.452 "adrfam": "IPv4", 00:19:46.452 "traddr": "10.0.0.1", 00:19:46.452 "trsvcid": "59928" 00:19:46.452 }, 00:19:46.452 "auth": { 00:19:46.452 "state": "completed", 00:19:46.452 "digest": "sha256", 00:19:46.452 "dhgroup": "ffdhe6144" 00:19:46.452 } 00:19:46.452 } 00:19:46.452 ]' 00:19:46.452 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.452 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:46.452 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.452 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:46.452 08:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.756 08:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.756 08:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.756 08:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.756 08:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YzUwYjU4YTQ1ZTNlNzNkM2IxMzRlNzMyMDU3NmQ0OTU3OTM2NDhjZTJhNmQyNTEygnshyQ==: --dhchap-ctrl-secret DHHC-1:01:MTk5ZjcxY2Y4YjlkY2ViMGRlMDJlZjM5M2M5ZjA2MGG+oj4m: 00:19:47.710 08:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.711 08:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.711 08:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.711 08:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.711 08:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.711 08:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.711 
08:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:47.711 08:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:47.711 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:47.711 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.711 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:47.711 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:47.711 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:47.711 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.711 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:47.711 08:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.711 08:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.711 08:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.711 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.711 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.971 00:19:47.971 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.971 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.971 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.232 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.232 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.232 08:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:48.232 08:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.232 08:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:48.232 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.232 { 00:19:48.232 "cntlid": 39, 00:19:48.232 "qid": 0, 00:19:48.232 "state": "enabled", 00:19:48.232 "listen_address": { 00:19:48.232 "trtype": "TCP", 00:19:48.232 "adrfam": "IPv4", 00:19:48.232 "traddr": "10.0.0.2", 00:19:48.232 "trsvcid": "4420" 00:19:48.232 }, 00:19:48.232 "peer_address": { 00:19:48.232 "trtype": "TCP", 00:19:48.232 "adrfam": "IPv4", 00:19:48.232 "traddr": "10.0.0.1", 00:19:48.232 "trsvcid": "59974" 00:19:48.232 }, 00:19:48.232 "auth": { 00:19:48.232 "state": "completed", 00:19:48.232 "digest": "sha256", 00:19:48.232 "dhgroup": "ffdhe6144" 00:19:48.232 } 00:19:48.232 } 00:19:48.232 ]' 00:19:48.232 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.232 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.232 08:58:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.232 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:48.232 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.492 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.492 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.492 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.493 08:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.436 
08:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.436 08:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.008 00:19:50.008 08:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.008 08:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.008 08:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.269 08:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.269 08:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.269 08:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.269 08:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.269 08:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:50.269 08:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.269 { 00:19:50.269 "cntlid": 41, 00:19:50.269 "qid": 0, 00:19:50.269 "state": "enabled", 00:19:50.269 "listen_address": { 00:19:50.269 "trtype": "TCP", 00:19:50.269 "adrfam": "IPv4", 00:19:50.269 "traddr": "10.0.0.2", 00:19:50.269 "trsvcid": "4420" 00:19:50.269 }, 00:19:50.269 "peer_address": { 00:19:50.269 "trtype": "TCP", 00:19:50.269 "adrfam": "IPv4", 00:19:50.269 "traddr": "10.0.0.1", 00:19:50.269 "trsvcid": "60008" 00:19:50.269 }, 00:19:50.269 "auth": { 00:19:50.269 "state": "completed", 00:19:50.269 "digest": "sha256", 00:19:50.269 "dhgroup": "ffdhe8192" 00:19:50.269 } 00:19:50.269 } 00:19:50.269 ]' 00:19:50.269 08:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.269 08:58:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.269 08:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.269 08:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:50.269 08:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.269 08:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.269 08:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.269 08:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.529 08:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:19:51.099 08:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.099 08:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.099 08:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.099 08:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.359 08:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.359 08:58:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.359 08:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:51.359 08:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:51.359 08:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:51.359 08:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.359 08:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:51.359 08:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:51.359 08:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:51.359 08:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.359 08:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.359 08:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.359 08:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.359 08:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.360 08:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.360 08:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.930 00:19:51.930 08:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.930 08:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.930 08:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.190 08:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.190 08:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.190 08:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.190 08:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.190 08:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.190 08:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.190 { 00:19:52.190 "cntlid": 43, 00:19:52.190 "qid": 0, 00:19:52.190 "state": "enabled", 00:19:52.190 "listen_address": { 00:19:52.190 "trtype": "TCP", 00:19:52.190 "adrfam": "IPv4", 00:19:52.190 "traddr": "10.0.0.2", 00:19:52.190 "trsvcid": "4420" 00:19:52.190 }, 00:19:52.190 "peer_address": { 00:19:52.190 "trtype": "TCP", 00:19:52.190 "adrfam": "IPv4", 00:19:52.190 "traddr": "10.0.0.1", 00:19:52.190 "trsvcid": "47978" 00:19:52.190 }, 00:19:52.190 "auth": { 00:19:52.191 "state": "completed", 00:19:52.191 "digest": "sha256", 00:19:52.191 "dhgroup": "ffdhe8192" 00:19:52.191 } 00:19:52.191 } 00:19:52.191 ]' 00:19:52.191 08:58:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.191 08:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.191 08:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.191 08:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:52.191 08:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.191 08:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.191 08:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.191 08:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.455 08:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MzkzYTJhN2JiZTNmZTMwN2MxYzlkMWUzNWJmYWJmMmZPRL1p: --dhchap-ctrl-secret DHHC-1:02:YmU0OWZmMmYyNjA2ZWVkNDk5MTEzYWE4N2VjZDc1NWQ5NjA4MWI4ZDQ1MDIwNGNh643I1w==: 00:19:53.028 08:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.289 08:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.289 08:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.289 08:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.289 08:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:19:53.289 08:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.289 08:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:53.289 08:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:53.289 08:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:53.289 08:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.289 08:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:53.289 08:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:53.289 08:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:53.289 08:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.289 08:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.289 08:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.289 08:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.289 08:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.289 08:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.289 08:58:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.861 00:19:53.861 08:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.861 08:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.861 08:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.861 08:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.861 08:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.861 08:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.861 08:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.122 08:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.122 08:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.122 { 00:19:54.122 "cntlid": 45, 00:19:54.122 "qid": 0, 00:19:54.122 "state": "enabled", 00:19:54.122 "listen_address": { 00:19:54.122 "trtype": "TCP", 00:19:54.122 "adrfam": "IPv4", 00:19:54.122 "traddr": "10.0.0.2", 00:19:54.122 "trsvcid": "4420" 00:19:54.122 }, 00:19:54.122 "peer_address": { 00:19:54.122 "trtype": "TCP", 00:19:54.122 "adrfam": "IPv4", 00:19:54.122 "traddr": "10.0.0.1", 00:19:54.122 "trsvcid": "48012" 00:19:54.122 }, 00:19:54.122 "auth": { 00:19:54.122 "state": "completed", 00:19:54.122 "digest": "sha256", 00:19:54.122 "dhgroup": "ffdhe8192" 00:19:54.122 } 00:19:54.122 } 00:19:54.122 ]' 
00:19:54.122 08:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.122 08:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.122 08:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.122 08:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:54.122 08:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.122 08:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.122 08:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.122 08:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.383 08:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YzUwYjU4YTQ1ZTNlNzNkM2IxMzRlNzMyMDU3NmQ0OTU3OTM2NDhjZTJhNmQyNTEygnshyQ==: --dhchap-ctrl-secret DHHC-1:01:MTk5ZjcxY2Y4YjlkY2ViMGRlMDJlZjM5M2M5ZjA2MGG+oj4m: 00:19:54.957 08:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.957 08:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.957 08:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.957 08:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.957 08:58:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.957 08:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.957 08:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:54.957 08:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:55.218 08:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:55.218 08:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.218 08:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:55.218 08:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:55.218 08:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:55.218 08:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.218 08:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:55.218 08:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:55.218 08:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.218 08:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:55.218 08:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.218 08:58:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.790 00:19:55.790 08:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.790 08:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.790 08:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.052 08:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.052 08:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.052 08:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.052 08:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.052 08:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.052 08:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.052 { 00:19:56.052 "cntlid": 47, 00:19:56.052 "qid": 0, 00:19:56.052 "state": "enabled", 00:19:56.052 "listen_address": { 00:19:56.052 "trtype": "TCP", 00:19:56.052 "adrfam": "IPv4", 00:19:56.052 "traddr": "10.0.0.2", 00:19:56.052 "trsvcid": "4420" 00:19:56.052 }, 00:19:56.052 "peer_address": { 00:19:56.052 "trtype": "TCP", 00:19:56.052 "adrfam": "IPv4", 00:19:56.052 "traddr": "10.0.0.1", 00:19:56.052 "trsvcid": "48042" 00:19:56.052 }, 00:19:56.052 "auth": { 00:19:56.052 "state": "completed", 00:19:56.052 "digest": "sha256", 00:19:56.052 "dhgroup": "ffdhe8192" 00:19:56.052 } 00:19:56.052 } 00:19:56.052 ]' 00:19:56.052 08:58:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.052 08:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.052 08:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.052 08:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.052 08:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.052 08:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.052 08:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.052 08:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.313 08:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:19:56.882 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.882 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:56.882 08:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.883 08:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.883 08:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.883 
08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:56.883 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.883 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.883 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:56.883 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:57.143 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:57.143 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.143 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:57.143 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:57.143 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:57.143 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.143 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.143 08:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:57.143 08:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.143 08:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:57.143 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.143 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.403 00:19:57.403 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.403 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.403 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.664 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.664 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.664 08:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:57.664 08:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.664 08:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:57.664 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.664 { 00:19:57.664 "cntlid": 49, 00:19:57.664 "qid": 0, 00:19:57.664 "state": "enabled", 00:19:57.664 "listen_address": { 00:19:57.664 "trtype": "TCP", 00:19:57.664 "adrfam": "IPv4", 00:19:57.664 "traddr": "10.0.0.2", 00:19:57.664 "trsvcid": "4420" 00:19:57.664 }, 00:19:57.664 "peer_address": { 00:19:57.664 "trtype": "TCP", 00:19:57.664 "adrfam": "IPv4", 00:19:57.664 "traddr": "10.0.0.1", 00:19:57.664 "trsvcid": "48062" 00:19:57.664 }, 00:19:57.664 "auth": 
{ 00:19:57.664 "state": "completed", 00:19:57.664 "digest": "sha384", 00:19:57.664 "dhgroup": "null" 00:19:57.664 } 00:19:57.664 } 00:19:57.664 ]' 00:19:57.664 08:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.664 08:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.664 08:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.664 08:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:57.664 08:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.664 08:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.664 08:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.664 08:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.925 08:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:19:58.497 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.497 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:58.497 08:58:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.497 08:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.497 08:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.497 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.497 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:58.497 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:58.759 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:58.759 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.759 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:58.759 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:58.759 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:58.759 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.759 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.759 08:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.759 08:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.759 08:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.759 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.759 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.020 00:19:59.020 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.020 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.020 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.281 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.282 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.282 08:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.282 08:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.282 08:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.282 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.282 { 00:19:59.282 "cntlid": 51, 00:19:59.282 "qid": 0, 00:19:59.282 "state": "enabled", 00:19:59.282 "listen_address": { 00:19:59.282 "trtype": "TCP", 00:19:59.282 "adrfam": "IPv4", 00:19:59.282 "traddr": "10.0.0.2", 00:19:59.282 "trsvcid": "4420" 00:19:59.282 }, 00:19:59.282 "peer_address": { 00:19:59.282 "trtype": "TCP", 00:19:59.282 "adrfam": "IPv4", 00:19:59.282 "traddr": "10.0.0.1", 00:19:59.282 "trsvcid": "48086" 00:19:59.282 }, 
00:19:59.282 "auth": { 00:19:59.282 "state": "completed", 00:19:59.282 "digest": "sha384", 00:19:59.282 "dhgroup": "null" 00:19:59.282 } 00:19:59.282 } 00:19:59.282 ]' 00:19:59.282 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.282 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:59.282 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.282 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:59.282 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.282 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.282 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.282 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.543 08:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MzkzYTJhN2JiZTNmZTMwN2MxYzlkMWUzNWJmYWJmMmZPRL1p: --dhchap-ctrl-secret DHHC-1:02:YmU0OWZmMmYyNjA2ZWVkNDk5MTEzYWE4N2VjZDc1NWQ5NjA4MWI4ZDQ1MDIwNGNh643I1w==: 00:20:00.115 08:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.115 08:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:00.115 08:58:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.115 08:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.115 08:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.115 08:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.115 08:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:00.115 08:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:00.376 08:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:00.376 08:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.376 08:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:00.376 08:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:00.376 08:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:00.376 08:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.376 08:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.376 08:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.376 08:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.376 08:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.376 08:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.376 08:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.638 00:20:00.638 08:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.638 08:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.638 08:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.900 08:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.900 08:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.900 08:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.900 08:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.900 08:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.900 08:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.900 { 00:20:00.900 "cntlid": 53, 00:20:00.900 "qid": 0, 00:20:00.900 "state": "enabled", 00:20:00.900 "listen_address": { 00:20:00.900 "trtype": "TCP", 00:20:00.900 "adrfam": "IPv4", 00:20:00.900 "traddr": "10.0.0.2", 00:20:00.900 "trsvcid": "4420" 00:20:00.900 }, 00:20:00.900 "peer_address": { 00:20:00.900 "trtype": "TCP", 00:20:00.900 "adrfam": "IPv4", 00:20:00.900 "traddr": "10.0.0.1", 00:20:00.900 "trsvcid": "48864" 00:20:00.900 }, 
00:20:00.900 "auth": { 00:20:00.900 "state": "completed", 00:20:00.900 "digest": "sha384", 00:20:00.900 "dhgroup": "null" 00:20:00.900 } 00:20:00.900 } 00:20:00.900 ]' 00:20:00.900 08:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.900 08:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.900 08:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.900 08:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:00.900 08:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.900 08:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.900 08:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.900 08:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.161 08:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YzUwYjU4YTQ1ZTNlNzNkM2IxMzRlNzMyMDU3NmQ0OTU3OTM2NDhjZTJhNmQyNTEygnshyQ==: --dhchap-ctrl-secret DHHC-1:01:MTk5ZjcxY2Y4YjlkY2ViMGRlMDJlZjM5M2M5ZjA2MGG+oj4m: 00:20:01.794 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.794 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.794 08:58:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:20:01.794 08:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.794 08:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.794 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.794 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:01.794 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:02.055 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:02.055 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.055 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:02.055 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:02.055 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:02.055 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.055 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:02.055 08:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.055 08:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.055 08:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.055 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.055 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.316 00:20:02.316 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.316 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.316 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.316 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.316 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.316 08:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.316 08:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.577 08:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.577 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.577 { 00:20:02.577 "cntlid": 55, 00:20:02.577 "qid": 0, 00:20:02.577 "state": "enabled", 00:20:02.577 "listen_address": { 00:20:02.577 "trtype": "TCP", 00:20:02.577 "adrfam": "IPv4", 00:20:02.577 "traddr": "10.0.0.2", 00:20:02.577 "trsvcid": "4420" 00:20:02.577 }, 00:20:02.577 "peer_address": { 00:20:02.577 "trtype": "TCP", 00:20:02.577 "adrfam": "IPv4", 00:20:02.577 "traddr": "10.0.0.1", 00:20:02.577 "trsvcid": "48900" 00:20:02.577 }, 00:20:02.577 "auth": { 00:20:02.577 "state": "completed", 00:20:02.577 
"digest": "sha384", 00:20:02.577 "dhgroup": "null" 00:20:02.577 } 00:20:02.577 } 00:20:02.577 ]' 00:20:02.577 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.577 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.577 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.577 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:02.577 08:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.577 08:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.577 08:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.577 08:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.838 08:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:20:03.411 08:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.411 08:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:03.411 08:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.411 08:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.411 
08:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.411 08:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.411 08:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.411 08:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:03.411 08:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:03.672 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:03.672 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.672 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:03.672 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:03.672 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:03.672 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.672 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.672 08:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.672 08:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.672 08:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.672 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.672 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.933 00:20:03.933 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.933 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.933 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.933 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.933 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.934 08:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.934 08:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.195 08:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:04.195 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.195 { 00:20:04.195 "cntlid": 57, 00:20:04.195 "qid": 0, 00:20:04.195 "state": "enabled", 00:20:04.195 "listen_address": { 00:20:04.195 "trtype": "TCP", 00:20:04.195 "adrfam": "IPv4", 00:20:04.195 "traddr": "10.0.0.2", 00:20:04.195 "trsvcid": "4420" 00:20:04.195 }, 00:20:04.195 "peer_address": { 00:20:04.195 "trtype": "TCP", 00:20:04.195 "adrfam": "IPv4", 00:20:04.195 "traddr": "10.0.0.1", 00:20:04.195 "trsvcid": "48908" 00:20:04.195 }, 00:20:04.195 "auth": 
{ 00:20:04.195 "state": "completed", 00:20:04.195 "digest": "sha384", 00:20:04.195 "dhgroup": "ffdhe2048" 00:20:04.195 } 00:20:04.195 } 00:20:04.195 ]' 00:20:04.195 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.195 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.195 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.195 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:04.195 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.195 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.195 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.195 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.455 08:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:20:05.027 08:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.027 08:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:05.027 08:58:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:05.027 08:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.027 08:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:05.027 08:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.027 08:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:05.027 08:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:05.288 08:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:05.288 08:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.288 08:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:05.288 08:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:05.288 08:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:05.288 08:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.288 08:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.288 08:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:05.288 08:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.288 08:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:05.288 08:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.288 08:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.549 00:20:05.549 08:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.549 08:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.549 08:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.811 08:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.811 08:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.811 08:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:05.811 08:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.811 08:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:05.811 08:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.811 { 00:20:05.811 "cntlid": 59, 00:20:05.811 "qid": 0, 00:20:05.811 "state": "enabled", 00:20:05.811 "listen_address": { 00:20:05.811 "trtype": "TCP", 00:20:05.811 "adrfam": "IPv4", 00:20:05.811 "traddr": "10.0.0.2", 00:20:05.811 "trsvcid": "4420" 00:20:05.811 }, 00:20:05.811 "peer_address": { 00:20:05.811 "trtype": "TCP", 00:20:05.811 "adrfam": "IPv4", 00:20:05.811 "traddr": 
"10.0.0.1", 00:20:05.811 "trsvcid": "48942" 00:20:05.811 }, 00:20:05.811 "auth": { 00:20:05.811 "state": "completed", 00:20:05.811 "digest": "sha384", 00:20:05.811 "dhgroup": "ffdhe2048" 00:20:05.811 } 00:20:05.811 } 00:20:05.811 ]' 00:20:05.811 08:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.811 08:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.811 08:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.811 08:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:05.811 08:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.811 08:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.811 08:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.811 08:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.071 08:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MzkzYTJhN2JiZTNmZTMwN2MxYzlkMWUzNWJmYWJmMmZPRL1p: --dhchap-ctrl-secret DHHC-1:02:YmU0OWZmMmYyNjA2ZWVkNDk5MTEzYWE4N2VjZDc1NWQ5NjA4MWI4ZDQ1MDIwNGNh643I1w==: 00:20:06.642 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.903 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.164 00:20:07.164 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.164 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.164 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.425 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.425 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.425 08:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:07.425 08:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.425 08:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:07.425 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.425 { 00:20:07.425 "cntlid": 61, 00:20:07.425 "qid": 0, 00:20:07.425 "state": "enabled", 00:20:07.425 "listen_address": { 00:20:07.425 "trtype": "TCP", 00:20:07.425 "adrfam": "IPv4", 00:20:07.425 "traddr": "10.0.0.2", 00:20:07.425 "trsvcid": "4420" 00:20:07.425 }, 00:20:07.425 "peer_address": { 
00:20:07.425 "trtype": "TCP", 00:20:07.425 "adrfam": "IPv4", 00:20:07.425 "traddr": "10.0.0.1", 00:20:07.425 "trsvcid": "48972" 00:20:07.425 }, 00:20:07.425 "auth": { 00:20:07.425 "state": "completed", 00:20:07.425 "digest": "sha384", 00:20:07.425 "dhgroup": "ffdhe2048" 00:20:07.425 } 00:20:07.425 } 00:20:07.425 ]' 00:20:07.425 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.425 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.425 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.425 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:07.425 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.425 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.425 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.425 08:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.686 08:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YzUwYjU4YTQ1ZTNlNzNkM2IxMzRlNzMyMDU3NmQ0OTU3OTM2NDhjZTJhNmQyNTEygnshyQ==: --dhchap-ctrl-secret DHHC-1:01:MTk5ZjcxY2Y4YjlkY2ViMGRlMDJlZjM5M2M5ZjA2MGG+oj4m: 00:20:08.628 08:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.628 08:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:08.628 08:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.628 08:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.628 08:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.628 08:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.628 08:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:08.628 08:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:08.628 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:08.628 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.628 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:08.628 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:08.628 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:08.628 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.628 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:08.628 08:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.628 08:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.628 08:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:20:08.628 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.628 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.889 00:20:08.889 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.889 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.889 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.889 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.889 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.889 08:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.889 08:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.889 08:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.889 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.889 { 00:20:08.889 "cntlid": 63, 00:20:08.889 "qid": 0, 00:20:08.889 "state": "enabled", 00:20:08.889 "listen_address": { 00:20:08.889 "trtype": "TCP", 00:20:08.889 "adrfam": "IPv4", 00:20:08.889 "traddr": "10.0.0.2", 00:20:08.889 "trsvcid": "4420" 00:20:08.889 }, 00:20:08.889 "peer_address": { 00:20:08.889 "trtype": "TCP", 00:20:08.889 "adrfam": 
"IPv4", 00:20:08.889 "traddr": "10.0.0.1", 00:20:08.889 "trsvcid": "48998" 00:20:08.889 }, 00:20:08.889 "auth": { 00:20:08.889 "state": "completed", 00:20:08.889 "digest": "sha384", 00:20:08.889 "dhgroup": "ffdhe2048" 00:20:08.889 } 00:20:08.889 } 00:20:08.889 ]' 00:20:08.889 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.150 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.150 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.150 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:09.150 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.150 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.150 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.150 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.411 08:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:20:09.983 08:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.983 08:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:09.983 08:58:32 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.983 08:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.983 08:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.983 08:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.983 08:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.983 08:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:09.983 08:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:10.244 08:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:10.244 08:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.244 08:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:10.244 08:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:10.244 08:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:10.244 08:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.244 08:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.244 08:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.244 08:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.244 08:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:20:10.244 08:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.244 08:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.505 00:20:10.505 08:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.505 08:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.505 08:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.766 08:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.766 08:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.766 08:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.766 08:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.766 08:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.766 08:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.766 { 00:20:10.766 "cntlid": 65, 00:20:10.766 "qid": 0, 00:20:10.766 "state": "enabled", 00:20:10.766 "listen_address": { 00:20:10.766 "trtype": "TCP", 00:20:10.766 "adrfam": "IPv4", 00:20:10.766 "traddr": "10.0.0.2", 00:20:10.766 "trsvcid": "4420" 00:20:10.766 }, 00:20:10.766 
"peer_address": { 00:20:10.766 "trtype": "TCP", 00:20:10.766 "adrfam": "IPv4", 00:20:10.766 "traddr": "10.0.0.1", 00:20:10.766 "trsvcid": "49024" 00:20:10.766 }, 00:20:10.766 "auth": { 00:20:10.766 "state": "completed", 00:20:10.766 "digest": "sha384", 00:20:10.766 "dhgroup": "ffdhe3072" 00:20:10.766 } 00:20:10.766 } 00:20:10.766 ]' 00:20:10.766 08:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.766 08:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.766 08:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.766 08:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:10.766 08:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.766 08:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.766 08:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.767 08:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.028 08:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:20:11.600 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.600 08:58:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:11.600 08:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.600 08:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.600 08:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.600 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.600 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.600 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.861 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:11.861 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.861 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:11.861 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:11.861 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:11.861 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.861 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.862 08:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.862 08:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.862 
08:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.862 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.862 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.123 00:20:12.123 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.123 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.123 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.384 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.384 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.384 08:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:12.384 08:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.384 08:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:12.384 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.384 { 00:20:12.384 "cntlid": 67, 00:20:12.384 "qid": 0, 00:20:12.384 "state": "enabled", 00:20:12.384 "listen_address": { 00:20:12.384 "trtype": "TCP", 00:20:12.384 "adrfam": "IPv4", 00:20:12.384 "traddr": "10.0.0.2", 
00:20:12.384 "trsvcid": "4420" 00:20:12.384 }, 00:20:12.384 "peer_address": { 00:20:12.384 "trtype": "TCP", 00:20:12.384 "adrfam": "IPv4", 00:20:12.384 "traddr": "10.0.0.1", 00:20:12.384 "trsvcid": "51436" 00:20:12.384 }, 00:20:12.384 "auth": { 00:20:12.384 "state": "completed", 00:20:12.384 "digest": "sha384", 00:20:12.384 "dhgroup": "ffdhe3072" 00:20:12.384 } 00:20:12.384 } 00:20:12.384 ]' 00:20:12.384 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.384 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.384 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.384 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:12.384 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.384 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.384 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.384 08:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.644 08:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MzkzYTJhN2JiZTNmZTMwN2MxYzlkMWUzNWJmYWJmMmZPRL1p: --dhchap-ctrl-secret DHHC-1:02:YmU0OWZmMmYyNjA2ZWVkNDk5MTEzYWE4N2VjZDc1NWQ5NjA4MWI4ZDQ1MDIwNGNh643I1w==: 00:20:13.214 08:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.474 08:58:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:13.474 08:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.474 08:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.474 08:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.474 08:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.474 08:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:13.474 08:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:13.474 08:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:13.474 08:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.474 08:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:13.474 08:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:13.474 08:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:13.474 08:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.474 08:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.474 08:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.474 08:58:35 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:13.474 08:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.474 08:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.474 08:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.735 00:20:13.735 08:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.735 08:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.735 08:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.995 08:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.995 08:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.995 08:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.995 08:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.995 08:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.995 08:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.995 { 00:20:13.995 "cntlid": 69, 00:20:13.995 "qid": 0, 00:20:13.995 "state": "enabled", 00:20:13.995 "listen_address": { 00:20:13.995 "trtype": "TCP", 
00:20:13.995 "adrfam": "IPv4", 00:20:13.995 "traddr": "10.0.0.2", 00:20:13.995 "trsvcid": "4420" 00:20:13.995 }, 00:20:13.995 "peer_address": { 00:20:13.995 "trtype": "TCP", 00:20:13.995 "adrfam": "IPv4", 00:20:13.995 "traddr": "10.0.0.1", 00:20:13.995 "trsvcid": "51462" 00:20:13.995 }, 00:20:13.995 "auth": { 00:20:13.995 "state": "completed", 00:20:13.995 "digest": "sha384", 00:20:13.995 "dhgroup": "ffdhe3072" 00:20:13.995 } 00:20:13.995 } 00:20:13.995 ]' 00:20:13.995 08:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.995 08:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.995 08:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.995 08:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:13.995 08:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.995 08:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.995 08:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.995 08:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.256 08:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YzUwYjU4YTQ1ZTNlNzNkM2IxMzRlNzMyMDU3NmQ0OTU3OTM2NDhjZTJhNmQyNTEygnshyQ==: --dhchap-ctrl-secret DHHC-1:01:MTk5ZjcxY2Y4YjlkY2ViMGRlMDJlZjM5M2M5ZjA2MGG+oj4m: 00:20:15.197 08:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.197 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:15.197 08:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:15.197 08:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.197 08:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.197 08:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.197 08:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.197 08:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:15.197 08:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:15.197 08:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:15.197 08:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.197 08:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:15.197 08:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:15.197 08:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:15.197 08:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.197 08:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:15.197 08:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.197 08:58:37 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:20:15.198 08:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.198 08:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.198 08:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.458 00:20:15.458 08:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.458 08:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.458 08:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.718 08:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.718 08:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.718 08:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.718 08:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.718 08:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.718 08:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.718 { 00:20:15.718 "cntlid": 71, 00:20:15.718 "qid": 0, 00:20:15.718 "state": "enabled", 00:20:15.718 "listen_address": { 00:20:15.718 "trtype": "TCP", 00:20:15.718 "adrfam": "IPv4", 00:20:15.718 "traddr": 
"10.0.0.2", 00:20:15.718 "trsvcid": "4420" 00:20:15.718 }, 00:20:15.718 "peer_address": { 00:20:15.718 "trtype": "TCP", 00:20:15.718 "adrfam": "IPv4", 00:20:15.718 "traddr": "10.0.0.1", 00:20:15.718 "trsvcid": "51494" 00:20:15.718 }, 00:20:15.718 "auth": { 00:20:15.718 "state": "completed", 00:20:15.718 "digest": "sha384", 00:20:15.718 "dhgroup": "ffdhe3072" 00:20:15.718 } 00:20:15.718 } 00:20:15.718 ]' 00:20:15.718 08:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.718 08:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.718 08:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.718 08:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:15.718 08:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.719 08:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.719 08:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.719 08:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.987 08:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:20:16.589 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.589 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:16.589 08:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.589 08:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.589 08:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.589 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.589 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.589 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:16.589 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:16.850 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:16.850 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.850 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:16.850 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:16.850 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:16.850 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.850 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.850 08:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.850 08:58:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.850 08:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.850 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.850 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.110 00:20:17.110 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.110 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.110 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.371 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.371 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.371 08:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:17.371 08:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.371 08:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:17.371 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.371 { 00:20:17.371 "cntlid": 73, 00:20:17.371 "qid": 0, 00:20:17.371 "state": "enabled", 00:20:17.371 "listen_address": { 00:20:17.371 
"trtype": "TCP", 00:20:17.371 "adrfam": "IPv4", 00:20:17.371 "traddr": "10.0.0.2", 00:20:17.371 "trsvcid": "4420" 00:20:17.371 }, 00:20:17.371 "peer_address": { 00:20:17.371 "trtype": "TCP", 00:20:17.371 "adrfam": "IPv4", 00:20:17.371 "traddr": "10.0.0.1", 00:20:17.371 "trsvcid": "51514" 00:20:17.371 }, 00:20:17.371 "auth": { 00:20:17.371 "state": "completed", 00:20:17.371 "digest": "sha384", 00:20:17.371 "dhgroup": "ffdhe4096" 00:20:17.371 } 00:20:17.371 } 00:20:17.371 ]' 00:20:17.371 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.371 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.371 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.371 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:17.371 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.371 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.371 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.371 08:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.632 08:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:20:18.205 08:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:18.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.472 08:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.732 00:20:18.732 08:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.732 08:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.732 08:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.993 08:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.993 08:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.993 08:58:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.993 08:58:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.993 08:58:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.993 08:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.993 { 00:20:18.993 "cntlid": 75, 00:20:18.993 "qid": 0, 
00:20:18.993 "state": "enabled", 00:20:18.993 "listen_address": { 00:20:18.993 "trtype": "TCP", 00:20:18.993 "adrfam": "IPv4", 00:20:18.993 "traddr": "10.0.0.2", 00:20:18.993 "trsvcid": "4420" 00:20:18.993 }, 00:20:18.993 "peer_address": { 00:20:18.993 "trtype": "TCP", 00:20:18.993 "adrfam": "IPv4", 00:20:18.993 "traddr": "10.0.0.1", 00:20:18.993 "trsvcid": "51544" 00:20:18.993 }, 00:20:18.993 "auth": { 00:20:18.993 "state": "completed", 00:20:18.993 "digest": "sha384", 00:20:18.993 "dhgroup": "ffdhe4096" 00:20:18.993 } 00:20:18.993 } 00:20:18.993 ]' 00:20:18.993 08:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.993 08:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.993 08:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.993 08:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:18.993 08:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.993 08:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.993 08:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.993 08:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.254 08:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MzkzYTJhN2JiZTNmZTMwN2MxYzlkMWUzNWJmYWJmMmZPRL1p: --dhchap-ctrl-secret DHHC-1:02:YmU0OWZmMmYyNjA2ZWVkNDk5MTEzYWE4N2VjZDc1NWQ5NjA4MWI4ZDQ1MDIwNGNh643I1w==: 00:20:20.196 08:58:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.196 08:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:20.196 08:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.196 08:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.196 08:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.196 08:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.196 08:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:20.196 08:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:20.196 08:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:20.196 08:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.196 08:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:20.196 08:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:20.196 08:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:20.196 08:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.196 08:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.196 
08:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.196 08:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.196 08:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.197 08:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.197 08:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.458 00:20:20.458 08:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.458 08:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.458 08:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.720 08:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.720 08:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.720 08:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.720 08:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.720 08:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.720 08:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.720 { 00:20:20.720 
"cntlid": 77, 00:20:20.720 "qid": 0, 00:20:20.720 "state": "enabled", 00:20:20.720 "listen_address": { 00:20:20.720 "trtype": "TCP", 00:20:20.720 "adrfam": "IPv4", 00:20:20.720 "traddr": "10.0.0.2", 00:20:20.720 "trsvcid": "4420" 00:20:20.720 }, 00:20:20.720 "peer_address": { 00:20:20.720 "trtype": "TCP", 00:20:20.720 "adrfam": "IPv4", 00:20:20.720 "traddr": "10.0.0.1", 00:20:20.720 "trsvcid": "51554" 00:20:20.720 }, 00:20:20.720 "auth": { 00:20:20.720 "state": "completed", 00:20:20.720 "digest": "sha384", 00:20:20.720 "dhgroup": "ffdhe4096" 00:20:20.720 } 00:20:20.720 } 00:20:20.720 ]' 00:20:20.720 08:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.720 08:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.720 08:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.720 08:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:20.720 08:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.720 08:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.720 08:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.720 08:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.981 08:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YzUwYjU4YTQ1ZTNlNzNkM2IxMzRlNzMyMDU3NmQ0OTU3OTM2NDhjZTJhNmQyNTEygnshyQ==: --dhchap-ctrl-secret DHHC-1:01:MTk5ZjcxY2Y4YjlkY2ViMGRlMDJlZjM5M2M5ZjA2MGG+oj4m: 00:20:21.923 08:58:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.923 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:21.923 08:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.923 08:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.923 08:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.923 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.924 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:21.924 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:21.924 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:21.924 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.924 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:21.924 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:21.924 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:21.924 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.924 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 
00:20:21.924 08:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.924 08:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.924 08:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.924 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.924 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:22.184 00:20:22.184 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.184 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.184 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.184 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.184 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.184 08:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.184 08:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.184 08:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.184 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.184 { 00:20:22.184 "cntlid": 79, 00:20:22.184 "qid": 0, 
00:20:22.184 "state": "enabled", 00:20:22.184 "listen_address": { 00:20:22.184 "trtype": "TCP", 00:20:22.184 "adrfam": "IPv4", 00:20:22.184 "traddr": "10.0.0.2", 00:20:22.184 "trsvcid": "4420" 00:20:22.184 }, 00:20:22.184 "peer_address": { 00:20:22.184 "trtype": "TCP", 00:20:22.184 "adrfam": "IPv4", 00:20:22.184 "traddr": "10.0.0.1", 00:20:22.184 "trsvcid": "40192" 00:20:22.184 }, 00:20:22.184 "auth": { 00:20:22.184 "state": "completed", 00:20:22.184 "digest": "sha384", 00:20:22.184 "dhgroup": "ffdhe4096" 00:20:22.184 } 00:20:22.184 } 00:20:22.184 ]' 00:20:22.184 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.445 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.445 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.445 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:22.445 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.445 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.445 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.445 08:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.704 08:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:20:23.274 08:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:20:23.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.274 08:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:23.274 08:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:23.274 08:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.274 08:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.274 08:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.274 08:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.274 08:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:23.274 08:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:23.535 08:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:23.535 08:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.535 08:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.535 08:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:23.535 08:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:23.535 08:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.535 08:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:23.535 08:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:23.535 08:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.535 08:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.535 08:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.535 08:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.796 00:20:23.796 08:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.796 08:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.796 08:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.057 08:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.057 08:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.057 08:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.057 08:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.057 08:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.057 08:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:20:24.057 { 00:20:24.057 "cntlid": 81, 00:20:24.057 "qid": 0, 00:20:24.057 "state": "enabled", 00:20:24.057 "listen_address": { 00:20:24.057 "trtype": "TCP", 00:20:24.057 "adrfam": "IPv4", 00:20:24.057 "traddr": "10.0.0.2", 00:20:24.057 "trsvcid": "4420" 00:20:24.057 }, 00:20:24.057 "peer_address": { 00:20:24.057 "trtype": "TCP", 00:20:24.057 "adrfam": "IPv4", 00:20:24.057 "traddr": "10.0.0.1", 00:20:24.057 "trsvcid": "40226" 00:20:24.057 }, 00:20:24.057 "auth": { 00:20:24.057 "state": "completed", 00:20:24.057 "digest": "sha384", 00:20:24.057 "dhgroup": "ffdhe6144" 00:20:24.057 } 00:20:24.057 } 00:20:24.057 ]' 00:20:24.057 08:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.057 08:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.057 08:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.057 08:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:24.057 08:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.057 08:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.057 08:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.057 08:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.318 08:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret 
DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.260 08:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.521 00:20:25.521 08:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.521 08:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.521 08:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.781 08:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.782 08:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.782 08:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:25.782 08:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.782 08:58:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:25.782 08:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.782 { 00:20:25.782 "cntlid": 83, 00:20:25.782 "qid": 0, 00:20:25.782 "state": "enabled", 00:20:25.782 "listen_address": { 00:20:25.782 "trtype": "TCP", 00:20:25.782 "adrfam": "IPv4", 00:20:25.782 "traddr": "10.0.0.2", 00:20:25.782 "trsvcid": "4420" 00:20:25.782 }, 00:20:25.782 "peer_address": { 00:20:25.782 "trtype": "TCP", 00:20:25.782 "adrfam": "IPv4", 00:20:25.782 "traddr": "10.0.0.1", 00:20:25.782 "trsvcid": "40248" 00:20:25.782 }, 00:20:25.782 "auth": { 00:20:25.782 "state": "completed", 00:20:25.782 "digest": "sha384", 00:20:25.782 "dhgroup": "ffdhe6144" 00:20:25.782 } 00:20:25.782 } 00:20:25.782 ]' 00:20:25.782 08:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.782 08:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.782 08:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.782 08:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:25.782 08:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.782 08:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.782 08:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.782 08:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.042 08:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:01:MzkzYTJhN2JiZTNmZTMwN2MxYzlkMWUzNWJmYWJmMmZPRL1p: --dhchap-ctrl-secret DHHC-1:02:YmU0OWZmMmYyNjA2ZWVkNDk5MTEzYWE4N2VjZDc1NWQ5NjA4MWI4ZDQ1MDIwNGNh643I1w==: 00:20:26.985 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.985 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:26.985 08:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.985 08:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.985 08:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.985 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.985 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:26.985 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:26.985 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:26.985 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.985 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:26.985 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:26.985 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:26.985 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.985 08:58:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.985 08:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.985 08:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.985 08:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.985 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.986 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.247 00:20:27.247 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.247 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.247 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.508 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.508 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.508 08:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.508 08:58:49 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:27.508 08:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.508 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.508 { 00:20:27.508 "cntlid": 85, 00:20:27.508 "qid": 0, 00:20:27.508 "state": "enabled", 00:20:27.508 "listen_address": { 00:20:27.508 "trtype": "TCP", 00:20:27.508 "adrfam": "IPv4", 00:20:27.508 "traddr": "10.0.0.2", 00:20:27.508 "trsvcid": "4420" 00:20:27.508 }, 00:20:27.508 "peer_address": { 00:20:27.508 "trtype": "TCP", 00:20:27.508 "adrfam": "IPv4", 00:20:27.508 "traddr": "10.0.0.1", 00:20:27.508 "trsvcid": "40260" 00:20:27.508 }, 00:20:27.508 "auth": { 00:20:27.508 "state": "completed", 00:20:27.508 "digest": "sha384", 00:20:27.508 "dhgroup": "ffdhe6144" 00:20:27.508 } 00:20:27.508 } 00:20:27.508 ]' 00:20:27.508 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.509 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.509 08:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.509 08:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:27.509 08:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.769 08:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.769 08:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.769 08:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.769 08:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YzUwYjU4YTQ1ZTNlNzNkM2IxMzRlNzMyMDU3NmQ0OTU3OTM2NDhjZTJhNmQyNTEygnshyQ==: --dhchap-ctrl-secret DHHC-1:01:MTk5ZjcxY2Y4YjlkY2ViMGRlMDJlZjM5M2M5ZjA2MGG+oj4m: 00:20:28.710 08:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.710 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.971 00:20:29.232 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.232 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.232 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.232 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.232 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.232 08:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:29.232 08:58:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:29.232 08:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:29.232 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.232 { 00:20:29.232 "cntlid": 87, 00:20:29.232 "qid": 0, 00:20:29.232 "state": "enabled", 00:20:29.232 "listen_address": { 00:20:29.232 "trtype": "TCP", 00:20:29.232 "adrfam": "IPv4", 00:20:29.232 "traddr": "10.0.0.2", 00:20:29.232 "trsvcid": "4420" 00:20:29.232 }, 00:20:29.232 "peer_address": { 00:20:29.232 "trtype": "TCP", 00:20:29.232 "adrfam": "IPv4", 00:20:29.232 "traddr": "10.0.0.1", 00:20:29.232 "trsvcid": "40274" 00:20:29.232 }, 00:20:29.232 "auth": { 00:20:29.232 "state": "completed", 00:20:29.232 "digest": "sha384", 00:20:29.232 "dhgroup": "ffdhe6144" 00:20:29.232 } 00:20:29.232 } 00:20:29.232 ]' 00:20:29.232 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.232 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.232 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.493 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.493 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.493 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.493 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.493 08:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.493 08:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.435 08:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:30.436 08:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.436 08:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.007 00:20:31.007 08:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.007 08:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.007 08:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.308 08:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.308 08:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.308 08:58:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.308 08:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.308 08:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.308 08:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.308 { 00:20:31.308 "cntlid": 89, 00:20:31.308 "qid": 0, 00:20:31.308 "state": "enabled", 00:20:31.308 "listen_address": { 00:20:31.308 "trtype": "TCP", 00:20:31.308 "adrfam": "IPv4", 00:20:31.308 "traddr": "10.0.0.2", 00:20:31.308 "trsvcid": "4420" 00:20:31.308 }, 00:20:31.308 "peer_address": { 00:20:31.308 "trtype": "TCP", 00:20:31.308 "adrfam": "IPv4", 00:20:31.308 "traddr": "10.0.0.1", 00:20:31.308 "trsvcid": "33632" 00:20:31.308 }, 00:20:31.308 "auth": { 00:20:31.308 "state": "completed", 00:20:31.308 "digest": "sha384", 00:20:31.308 "dhgroup": "ffdhe8192" 00:20:31.308 } 00:20:31.308 } 00:20:31.308 ]' 00:20:31.308 08:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.308 08:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.308 08:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.308 08:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:31.308 08:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.308 08:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.308 08:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.308 08:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.568 08:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:20:32.140 08:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.402 08:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:32.402 08:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.402 08:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.402 08:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.402 08:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.402 08:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:32.402 08:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:32.402 08:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:32.402 08:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.402 08:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.402 08:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:32.402 08:58:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:32.402 08:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.402 08:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.402 08:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.402 08:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.402 08:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.402 08:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.402 08:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.973 00:20:32.974 08:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.974 08:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.974 08:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.235 08:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.235 08:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.235 08:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.235 08:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.235 08:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.235 08:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.235 { 00:20:33.235 "cntlid": 91, 00:20:33.235 "qid": 0, 00:20:33.235 "state": "enabled", 00:20:33.235 "listen_address": { 00:20:33.235 "trtype": "TCP", 00:20:33.235 "adrfam": "IPv4", 00:20:33.235 "traddr": "10.0.0.2", 00:20:33.235 "trsvcid": "4420" 00:20:33.235 }, 00:20:33.235 "peer_address": { 00:20:33.235 "trtype": "TCP", 00:20:33.235 "adrfam": "IPv4", 00:20:33.235 "traddr": "10.0.0.1", 00:20:33.235 "trsvcid": "33662" 00:20:33.235 }, 00:20:33.235 "auth": { 00:20:33.235 "state": "completed", 00:20:33.235 "digest": "sha384", 00:20:33.235 "dhgroup": "ffdhe8192" 00:20:33.235 } 00:20:33.235 } 00:20:33.235 ]' 00:20:33.235 08:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.235 08:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.235 08:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.235 08:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:33.235 08:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.235 08:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.235 08:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.235 08:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.496 
08:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MzkzYTJhN2JiZTNmZTMwN2MxYzlkMWUzNWJmYWJmMmZPRL1p: --dhchap-ctrl-secret DHHC-1:02:YmU0OWZmMmYyNjA2ZWVkNDk5MTEzYWE4N2VjZDc1NWQ5NjA4MWI4ZDQ1MDIwNGNh643I1w==: 00:20:34.068 08:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.068 08:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:34.068 08:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:34.068 08:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.068 08:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:34.068 08:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.068 08:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:34.068 08:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:34.330 08:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:34.330 08:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.330 08:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.330 08:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:20:34.330 08:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:34.330 08:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.330 08:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.330 08:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:34.330 08:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.330 08:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:34.330 08:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.330 08:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.902 00:20:34.902 08:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.902 08:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.902 08:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.902 08:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.902 08:58:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.902 08:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:34.902 08:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.902 08:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:34.902 08:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.902 { 00:20:34.902 "cntlid": 93, 00:20:34.902 "qid": 0, 00:20:34.902 "state": "enabled", 00:20:34.902 "listen_address": { 00:20:34.902 "trtype": "TCP", 00:20:34.902 "adrfam": "IPv4", 00:20:34.902 "traddr": "10.0.0.2", 00:20:34.902 "trsvcid": "4420" 00:20:34.902 }, 00:20:34.902 "peer_address": { 00:20:34.902 "trtype": "TCP", 00:20:34.902 "adrfam": "IPv4", 00:20:34.902 "traddr": "10.0.0.1", 00:20:34.902 "trsvcid": "33694" 00:20:34.902 }, 00:20:34.902 "auth": { 00:20:34.902 "state": "completed", 00:20:34.902 "digest": "sha384", 00:20:34.902 "dhgroup": "ffdhe8192" 00:20:34.902 } 00:20:34.902 } 00:20:34.902 ]' 00:20:34.902 08:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.163 08:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.163 08:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.163 08:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.163 08:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.163 08:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.163 08:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.163 08:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:35.425 08:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YzUwYjU4YTQ1ZTNlNzNkM2IxMzRlNzMyMDU3NmQ0OTU3OTM2NDhjZTJhNmQyNTEygnshyQ==: --dhchap-ctrl-secret DHHC-1:01:MTk5ZjcxY2Y4YjlkY2ViMGRlMDJlZjM5M2M5ZjA2MGG+oj4m: 00:20:35.996 08:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.996 08:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:35.996 08:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.996 08:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.996 08:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.996 08:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.996 08:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:35.996 08:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:36.257 08:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:36.257 08:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.257 08:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.257 08:58:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:36.257 08:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:36.257 08:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.257 08:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:36.257 08:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:36.257 08:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.257 08:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:36.257 08:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.257 08:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.830 00:20:36.830 08:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.830 08:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.830 08:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.830 08:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.830 08:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.830 08:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:36.830 08:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.830 08:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:36.830 08:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.830 { 00:20:36.830 "cntlid": 95, 00:20:36.830 "qid": 0, 00:20:36.830 "state": "enabled", 00:20:36.830 "listen_address": { 00:20:36.830 "trtype": "TCP", 00:20:36.830 "adrfam": "IPv4", 00:20:36.830 "traddr": "10.0.0.2", 00:20:36.830 "trsvcid": "4420" 00:20:36.830 }, 00:20:36.830 "peer_address": { 00:20:36.830 "trtype": "TCP", 00:20:36.830 "adrfam": "IPv4", 00:20:36.830 "traddr": "10.0.0.1", 00:20:36.830 "trsvcid": "33728" 00:20:36.830 }, 00:20:36.830 "auth": { 00:20:36.830 "state": "completed", 00:20:36.830 "digest": "sha384", 00:20:36.830 "dhgroup": "ffdhe8192" 00:20:36.830 } 00:20:36.830 } 00:20:36.830 ]' 00:20:36.830 08:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.091 08:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.091 08:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.091 08:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:37.091 08:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.091 08:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.091 08:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.091 08:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.352 
08:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:20:37.924 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.924 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:37.924 08:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.924 08:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.924 08:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.924 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:37.924 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.924 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.924 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:37.924 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:38.185 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:38.185 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.185 08:59:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:38.185 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:38.185 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:38.185 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.185 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.185 08:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.185 08:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.185 08:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.185 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.185 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.447 00:20:38.447 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.447 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.447 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.447 08:59:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.447 08:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.447 08:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.447 08:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.707 08:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.707 08:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.707 { 00:20:38.707 "cntlid": 97, 00:20:38.707 "qid": 0, 00:20:38.707 "state": "enabled", 00:20:38.707 "listen_address": { 00:20:38.707 "trtype": "TCP", 00:20:38.707 "adrfam": "IPv4", 00:20:38.707 "traddr": "10.0.0.2", 00:20:38.707 "trsvcid": "4420" 00:20:38.707 }, 00:20:38.707 "peer_address": { 00:20:38.707 "trtype": "TCP", 00:20:38.707 "adrfam": "IPv4", 00:20:38.707 "traddr": "10.0.0.1", 00:20:38.707 "trsvcid": "33752" 00:20:38.707 }, 00:20:38.707 "auth": { 00:20:38.707 "state": "completed", 00:20:38.707 "digest": "sha512", 00:20:38.707 "dhgroup": "null" 00:20:38.707 } 00:20:38.707 } 00:20:38.707 ]' 00:20:38.707 08:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.707 08:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:38.707 08:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.707 08:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:38.707 08:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.707 08:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.707 08:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.707 08:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.969 08:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:20:39.539 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.539 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:39.539 08:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.539 08:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.539 08:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.539 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.539 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:39.539 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:39.800 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:39.800 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey 
qpairs 00:20:39.800 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:39.800 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:39.800 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:39.800 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.800 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.800 08:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.800 08:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.800 08:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.800 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.800 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.061 00:20:40.061 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.061 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.061 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:40.321 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.321 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.321 08:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:40.321 08:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.321 08:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:40.321 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.321 { 00:20:40.321 "cntlid": 99, 00:20:40.321 "qid": 0, 00:20:40.321 "state": "enabled", 00:20:40.321 "listen_address": { 00:20:40.321 "trtype": "TCP", 00:20:40.322 "adrfam": "IPv4", 00:20:40.322 "traddr": "10.0.0.2", 00:20:40.322 "trsvcid": "4420" 00:20:40.322 }, 00:20:40.322 "peer_address": { 00:20:40.322 "trtype": "TCP", 00:20:40.322 "adrfam": "IPv4", 00:20:40.322 "traddr": "10.0.0.1", 00:20:40.322 "trsvcid": "33788" 00:20:40.322 }, 00:20:40.322 "auth": { 00:20:40.322 "state": "completed", 00:20:40.322 "digest": "sha512", 00:20:40.322 "dhgroup": "null" 00:20:40.322 } 00:20:40.322 } 00:20:40.322 ]' 00:20:40.322 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.322 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:40.322 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.322 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:40.322 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.322 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.322 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.322 
08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.582 08:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MzkzYTJhN2JiZTNmZTMwN2MxYzlkMWUzNWJmYWJmMmZPRL1p: --dhchap-ctrl-secret DHHC-1:02:YmU0OWZmMmYyNjA2ZWVkNDk5MTEzYWE4N2VjZDc1NWQ5NjA4MWI4ZDQ1MDIwNGNh643I1w==: 00:20:41.152 08:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.413 08:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.673 00:20:41.673 08:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.673 08:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.673 08:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:41.934 08:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.934 08:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.934 08:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:41.934 08:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.934 08:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:41.934 08:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.934 { 00:20:41.934 "cntlid": 101, 00:20:41.934 "qid": 0, 00:20:41.934 "state": "enabled", 00:20:41.934 "listen_address": { 00:20:41.934 "trtype": "TCP", 00:20:41.934 "adrfam": "IPv4", 00:20:41.934 "traddr": "10.0.0.2", 00:20:41.934 "trsvcid": "4420" 00:20:41.934 }, 00:20:41.934 "peer_address": { 00:20:41.934 "trtype": "TCP", 00:20:41.934 "adrfam": "IPv4", 00:20:41.934 "traddr": "10.0.0.1", 00:20:41.934 "trsvcid": "60958" 00:20:41.934 }, 00:20:41.934 "auth": { 00:20:41.934 "state": "completed", 00:20:41.934 "digest": "sha512", 00:20:41.934 "dhgroup": "null" 00:20:41.934 } 00:20:41.934 } 00:20:41.934 ]' 00:20:41.934 08:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.934 08:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.934 08:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.934 08:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:41.934 08:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.934 08:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.934 08:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.934 
08:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.195 08:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YzUwYjU4YTQ1ZTNlNzNkM2IxMzRlNzMyMDU3NmQ0OTU3OTM2NDhjZTJhNmQyNTEygnshyQ==: --dhchap-ctrl-secret DHHC-1:01:MTk5ZjcxY2Y4YjlkY2ViMGRlMDJlZjM5M2M5ZjA2MGG+oj4m: 00:20:43.138 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.138 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:43.138 08:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:43.138 08:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.138 08:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:43.138 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.138 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:43.138 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:43.138 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:43.139 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:20:43.139 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:43.139 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:43.139 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:43.139 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.139 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:43.139 08:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:43.139 08:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.139 08:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:43.139 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.139 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.399 00:20:43.399 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.399 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.399 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.399 08:59:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.399 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.399 08:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:43.399 08:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.399 08:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:43.399 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.400 { 00:20:43.400 "cntlid": 103, 00:20:43.400 "qid": 0, 00:20:43.400 "state": "enabled", 00:20:43.400 "listen_address": { 00:20:43.400 "trtype": "TCP", 00:20:43.400 "adrfam": "IPv4", 00:20:43.400 "traddr": "10.0.0.2", 00:20:43.400 "trsvcid": "4420" 00:20:43.400 }, 00:20:43.400 "peer_address": { 00:20:43.400 "trtype": "TCP", 00:20:43.400 "adrfam": "IPv4", 00:20:43.400 "traddr": "10.0.0.1", 00:20:43.400 "trsvcid": "60984" 00:20:43.400 }, 00:20:43.400 "auth": { 00:20:43.400 "state": "completed", 00:20:43.400 "digest": "sha512", 00:20:43.400 "dhgroup": "null" 00:20:43.400 } 00:20:43.400 } 00:20:43.400 ]' 00:20:43.400 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.661 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.661 08:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.661 08:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:43.661 08:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.661 08:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.661 08:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.661 08:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.922 08:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:20:44.493 08:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.493 08:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:44.493 08:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:44.493 08:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.493 08:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:44.493 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.493 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.493 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:44.493 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:44.754 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:44.754 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:20:44.754 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:44.754 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:44.754 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:44.754 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.754 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.754 08:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:44.754 08:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.754 08:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:44.754 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.754 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.014 00:20:45.014 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.014 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.014 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:45.275 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.275 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.275 08:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.275 08:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.275 08:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.275 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.275 { 00:20:45.275 "cntlid": 105, 00:20:45.275 "qid": 0, 00:20:45.275 "state": "enabled", 00:20:45.275 "listen_address": { 00:20:45.275 "trtype": "TCP", 00:20:45.275 "adrfam": "IPv4", 00:20:45.275 "traddr": "10.0.0.2", 00:20:45.275 "trsvcid": "4420" 00:20:45.275 }, 00:20:45.275 "peer_address": { 00:20:45.275 "trtype": "TCP", 00:20:45.275 "adrfam": "IPv4", 00:20:45.275 "traddr": "10.0.0.1", 00:20:45.275 "trsvcid": "32776" 00:20:45.275 }, 00:20:45.275 "auth": { 00:20:45.275 "state": "completed", 00:20:45.275 "digest": "sha512", 00:20:45.275 "dhgroup": "ffdhe2048" 00:20:45.275 } 00:20:45.275 } 00:20:45.275 ]' 00:20:45.275 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.275 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.275 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.275 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:45.275 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.275 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.275 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:45.275 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.536 08:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:20:46.156 08:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.156 08:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:46.156 08:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.156 08:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.156 08:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.156 08:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.156 08:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:46.156 08:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:46.417 08:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:46.417 08:59:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.417 08:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:46.417 08:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:46.417 08:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:46.417 08:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.417 08:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.417 08:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.417 08:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.417 08:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.417 08:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.417 08:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.678 00:20:46.678 08:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.678 08:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.678 08:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.678 08:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.678 08:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.678 08:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.678 08:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.939 08:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.939 08:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.939 { 00:20:46.939 "cntlid": 107, 00:20:46.939 "qid": 0, 00:20:46.939 "state": "enabled", 00:20:46.939 "listen_address": { 00:20:46.939 "trtype": "TCP", 00:20:46.939 "adrfam": "IPv4", 00:20:46.939 "traddr": "10.0.0.2", 00:20:46.939 "trsvcid": "4420" 00:20:46.939 }, 00:20:46.939 "peer_address": { 00:20:46.939 "trtype": "TCP", 00:20:46.939 "adrfam": "IPv4", 00:20:46.939 "traddr": "10.0.0.1", 00:20:46.939 "trsvcid": "32810" 00:20:46.939 }, 00:20:46.939 "auth": { 00:20:46.939 "state": "completed", 00:20:46.939 "digest": "sha512", 00:20:46.939 "dhgroup": "ffdhe2048" 00:20:46.939 } 00:20:46.939 } 00:20:46.939 ]' 00:20:46.939 08:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.939 08:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.939 08:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.939 08:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:46.939 08:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.939 08:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.939 08:59:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.939 08:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.201 08:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MzkzYTJhN2JiZTNmZTMwN2MxYzlkMWUzNWJmYWJmMmZPRL1p: --dhchap-ctrl-secret DHHC-1:02:YmU0OWZmMmYyNjA2ZWVkNDk5MTEzYWE4N2VjZDc1NWQ5NjA4MWI4ZDQ1MDIwNGNh643I1w==: 00:20:47.773 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.773 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:47.773 08:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.773 08:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.773 08:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.773 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.773 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:47.773 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:48.034 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe2048 2 00:20:48.034 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.034 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:48.034 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:48.034 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:48.034 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.034 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.034 08:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.034 08:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.034 08:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.034 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.034 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.295 00:20:48.295 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.295 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.295 08:59:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.556 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.556 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.556 08:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.556 08:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.556 08:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.556 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.556 { 00:20:48.556 "cntlid": 109, 00:20:48.556 "qid": 0, 00:20:48.556 "state": "enabled", 00:20:48.556 "listen_address": { 00:20:48.556 "trtype": "TCP", 00:20:48.556 "adrfam": "IPv4", 00:20:48.556 "traddr": "10.0.0.2", 00:20:48.556 "trsvcid": "4420" 00:20:48.556 }, 00:20:48.556 "peer_address": { 00:20:48.556 "trtype": "TCP", 00:20:48.556 "adrfam": "IPv4", 00:20:48.556 "traddr": "10.0.0.1", 00:20:48.556 "trsvcid": "32844" 00:20:48.556 }, 00:20:48.556 "auth": { 00:20:48.556 "state": "completed", 00:20:48.556 "digest": "sha512", 00:20:48.556 "dhgroup": "ffdhe2048" 00:20:48.556 } 00:20:48.556 } 00:20:48.556 ]' 00:20:48.556 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.556 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.556 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.556 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:48.556 08:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.556 08:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:20:48.556 08:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.556 08:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.817 08:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YzUwYjU4YTQ1ZTNlNzNkM2IxMzRlNzMyMDU3NmQ0OTU3OTM2NDhjZTJhNmQyNTEygnshyQ==: --dhchap-ctrl-secret DHHC-1:01:MTk5ZjcxY2Y4YjlkY2ViMGRlMDJlZjM5M2M5ZjA2MGG+oj4m: 00:20:49.388 08:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.388 08:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:49.388 08:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.388 08:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.649 08:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.649 08:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.649 08:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:49.649 08:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:49.649 08:59:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:49.649 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.649 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:49.649 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:49.649 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:49.649 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.649 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:49.649 08:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.649 08:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.649 08:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.649 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:49.649 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:49.909 00:20:49.909 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.909 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.909 08:59:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.173 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.173 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.173 08:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:50.173 08:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.173 08:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.173 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.173 { 00:20:50.173 "cntlid": 111, 00:20:50.173 "qid": 0, 00:20:50.173 "state": "enabled", 00:20:50.173 "listen_address": { 00:20:50.173 "trtype": "TCP", 00:20:50.173 "adrfam": "IPv4", 00:20:50.173 "traddr": "10.0.0.2", 00:20:50.173 "trsvcid": "4420" 00:20:50.173 }, 00:20:50.173 "peer_address": { 00:20:50.173 "trtype": "TCP", 00:20:50.173 "adrfam": "IPv4", 00:20:50.173 "traddr": "10.0.0.1", 00:20:50.173 "trsvcid": "32874" 00:20:50.173 }, 00:20:50.173 "auth": { 00:20:50.173 "state": "completed", 00:20:50.173 "digest": "sha512", 00:20:50.173 "dhgroup": "ffdhe2048" 00:20:50.173 } 00:20:50.173 } 00:20:50.173 ]' 00:20:50.173 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.173 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.173 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.173 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:50.173 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.173 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
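Editor's note: each iteration in the log above ends with the same three `jq` assertions against the output of `rpc.py nvmf_subsystem_get_qpairs`. Below is a minimal self-contained sketch of that verification, with one qpair entry inlined from the log rather than fetched over the live RPC socket (assumes `jq` is installed; the JSON values are copied from the log output):

```shell
# Sample qpair listing, copied from the log output of
# `rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0`.
qpairs='[ { "cntlid": 111, "qid": 0, "state": "enabled",
            "auth": { "state": "completed", "digest": "sha512", "dhgroup": "ffdhe2048" } } ]'

# The three checks target/auth.sh runs per (digest, dhgroup, key) combination:
digest=$(echo "$qpairs" | jq -r '.[0].auth.digest')
dhgroup=$(echo "$qpairs" | jq -r '.[0].auth.dhgroup')
state=$(echo "$qpairs" | jq -r '.[0].auth.state')

# Authentication is considered successful only if the negotiated parameters
# match what was configured via bdev_nvme_set_options and the handshake completed.
[[ $digest == sha512 && $dhgroup == ffdhe2048 && $state == completed ]] \
    && echo "auth completed with $digest/$dhgroup"
```

In the real test the same pattern repeats for every digest/dhgroup/key combination; only the inlined JSON differs per run.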
00:20:50.173 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.173 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.434 08:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:20:51.005 08:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:51.266 08:59:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.266 08:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.527 00:20:51.527 08:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.527 08:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:20:51.527 08:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.788 08:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.788 08:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.788 08:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.788 08:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.788 08:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.788 08:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.788 { 00:20:51.788 "cntlid": 113, 00:20:51.788 "qid": 0, 00:20:51.788 "state": "enabled", 00:20:51.788 "listen_address": { 00:20:51.788 "trtype": "TCP", 00:20:51.788 "adrfam": "IPv4", 00:20:51.788 "traddr": "10.0.0.2", 00:20:51.788 "trsvcid": "4420" 00:20:51.788 }, 00:20:51.788 "peer_address": { 00:20:51.788 "trtype": "TCP", 00:20:51.788 "adrfam": "IPv4", 00:20:51.788 "traddr": "10.0.0.1", 00:20:51.788 "trsvcid": "54848" 00:20:51.788 }, 00:20:51.788 "auth": { 00:20:51.788 "state": "completed", 00:20:51.788 "digest": "sha512", 00:20:51.788 "dhgroup": "ffdhe3072" 00:20:51.788 } 00:20:51.788 } 00:20:51.788 ]' 00:20:51.788 08:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.788 08:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.788 08:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.788 08:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:51.788 08:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.788 08:59:14 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.788 08:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.788 08:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.049 08:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.992 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.253 00:20:53.253 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:20:53.253 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.253 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.514 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.514 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.514 08:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:53.514 08:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.514 08:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:53.514 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.514 { 00:20:53.514 "cntlid": 115, 00:20:53.514 "qid": 0, 00:20:53.514 "state": "enabled", 00:20:53.514 "listen_address": { 00:20:53.514 "trtype": "TCP", 00:20:53.514 "adrfam": "IPv4", 00:20:53.514 "traddr": "10.0.0.2", 00:20:53.514 "trsvcid": "4420" 00:20:53.514 }, 00:20:53.514 "peer_address": { 00:20:53.514 "trtype": "TCP", 00:20:53.514 "adrfam": "IPv4", 00:20:53.514 "traddr": "10.0.0.1", 00:20:53.514 "trsvcid": "54880" 00:20:53.514 }, 00:20:53.514 "auth": { 00:20:53.514 "state": "completed", 00:20:53.514 "digest": "sha512", 00:20:53.514 "dhgroup": "ffdhe3072" 00:20:53.514 } 00:20:53.514 } 00:20:53.514 ]' 00:20:53.514 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.514 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.514 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.514 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:53.514 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:20:53.514 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.514 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.514 08:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.775 08:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MzkzYTJhN2JiZTNmZTMwN2MxYzlkMWUzNWJmYWJmMmZPRL1p: --dhchap-ctrl-secret DHHC-1:02:YmU0OWZmMmYyNjA2ZWVkNDk5MTEzYWE4N2VjZDc1NWQ5NjA4MWI4ZDQ1MDIwNGNh643I1w==: 00:20:54.346 08:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.607 08:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:54.607 08:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:54.607 08:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.607 08:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:54.607 08:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.607 08:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:54.607 08:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:54.607 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:54.607 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.607 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:54.607 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:54.607 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:54.607 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.607 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.607 08:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:54.607 08:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.607 08:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:54.607 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.607 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.868 00:20:54.868 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:20:54.868 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.868 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.129 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.129 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.129 08:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.129 08:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.129 08:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.129 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.129 { 00:20:55.129 "cntlid": 117, 00:20:55.129 "qid": 0, 00:20:55.129 "state": "enabled", 00:20:55.129 "listen_address": { 00:20:55.129 "trtype": "TCP", 00:20:55.129 "adrfam": "IPv4", 00:20:55.129 "traddr": "10.0.0.2", 00:20:55.129 "trsvcid": "4420" 00:20:55.129 }, 00:20:55.129 "peer_address": { 00:20:55.129 "trtype": "TCP", 00:20:55.129 "adrfam": "IPv4", 00:20:55.129 "traddr": "10.0.0.1", 00:20:55.129 "trsvcid": "54916" 00:20:55.129 }, 00:20:55.129 "auth": { 00:20:55.129 "state": "completed", 00:20:55.129 "digest": "sha512", 00:20:55.129 "dhgroup": "ffdhe3072" 00:20:55.129 } 00:20:55.129 } 00:20:55.129 ]' 00:20:55.129 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.129 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.129 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.129 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:55.129 08:59:17 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.129 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.129 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.129 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.390 08:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YzUwYjU4YTQ1ZTNlNzNkM2IxMzRlNzMyMDU3NmQ0OTU3OTM2NDhjZTJhNmQyNTEygnshyQ==: --dhchap-ctrl-secret DHHC-1:01:MTk5ZjcxY2Y4YjlkY2ViMGRlMDJlZjM5M2M5ZjA2MGG+oj4m: 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.331 08:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.592 00:20:56.592 08:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.592 08:59:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.592 08:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.592 08:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.592 08:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.592 08:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:56.592 08:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.853 08:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:56.854 08:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.854 { 00:20:56.854 "cntlid": 119, 00:20:56.854 "qid": 0, 00:20:56.854 "state": "enabled", 00:20:56.854 "listen_address": { 00:20:56.854 "trtype": "TCP", 00:20:56.854 "adrfam": "IPv4", 00:20:56.854 "traddr": "10.0.0.2", 00:20:56.854 "trsvcid": "4420" 00:20:56.854 }, 00:20:56.854 "peer_address": { 00:20:56.854 "trtype": "TCP", 00:20:56.854 "adrfam": "IPv4", 00:20:56.854 "traddr": "10.0.0.1", 00:20:56.854 "trsvcid": "54934" 00:20:56.854 }, 00:20:56.854 "auth": { 00:20:56.854 "state": "completed", 00:20:56.854 "digest": "sha512", 00:20:56.854 "dhgroup": "ffdhe3072" 00:20:56.854 } 00:20:56.854 } 00:20:56.854 ]' 00:20:56.854 08:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.854 08:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.854 08:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.854 08:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:56.854 08:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:20:56.854 08:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.854 08:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.854 08:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.115 08:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:20:57.688 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.688 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:57.688 08:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.688 08:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.688 08:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.688 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.688 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.688 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.688 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.949 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:57.949 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.949 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:57.949 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:57.949 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:57.949 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.949 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.949 08:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.949 08:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.949 08:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.949 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.949 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.210 
00:20:58.210 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.210 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.210 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.470 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.470 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.470 08:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.470 08:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.470 08:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.470 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.470 { 00:20:58.470 "cntlid": 121, 00:20:58.470 "qid": 0, 00:20:58.470 "state": "enabled", 00:20:58.470 "listen_address": { 00:20:58.470 "trtype": "TCP", 00:20:58.470 "adrfam": "IPv4", 00:20:58.470 "traddr": "10.0.0.2", 00:20:58.470 "trsvcid": "4420" 00:20:58.470 }, 00:20:58.470 "peer_address": { 00:20:58.470 "trtype": "TCP", 00:20:58.470 "adrfam": "IPv4", 00:20:58.470 "traddr": "10.0.0.1", 00:20:58.470 "trsvcid": "54962" 00:20:58.470 }, 00:20:58.470 "auth": { 00:20:58.470 "state": "completed", 00:20:58.470 "digest": "sha512", 00:20:58.470 "dhgroup": "ffdhe4096" 00:20:58.470 } 00:20:58.470 } 00:20:58.470 ]' 00:20:58.470 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.470 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.470 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.470 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:58.470 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.470 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.470 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.470 08:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.730 08:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:20:59.674 08:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.674 08:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:59.674 08:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.674 08:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.674 08:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.674 08:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.674 08:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:59.674 
08:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:59.674 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:59.674 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.674 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:59.674 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:59.674 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:59.674 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.674 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.674 08:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.674 08:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.674 08:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.674 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.674 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.935 00:20:59.935 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.935 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.935 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.935 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.935 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.935 08:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.935 08:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.199 08:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:00.199 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.199 { 00:21:00.199 "cntlid": 123, 00:21:00.199 "qid": 0, 00:21:00.199 "state": "enabled", 00:21:00.199 "listen_address": { 00:21:00.199 "trtype": "TCP", 00:21:00.199 "adrfam": "IPv4", 00:21:00.199 "traddr": "10.0.0.2", 00:21:00.199 "trsvcid": "4420" 00:21:00.199 }, 00:21:00.199 "peer_address": { 00:21:00.199 "trtype": "TCP", 00:21:00.199 "adrfam": "IPv4", 00:21:00.199 "traddr": "10.0.0.1", 00:21:00.199 "trsvcid": "54988" 00:21:00.199 }, 00:21:00.199 "auth": { 00:21:00.199 "state": "completed", 00:21:00.199 "digest": "sha512", 00:21:00.199 "dhgroup": "ffdhe4096" 00:21:00.199 } 00:21:00.199 } 00:21:00.199 ]' 00:21:00.199 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.199 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.199 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:21:00.199 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:00.199 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.199 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.199 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.199 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.505 08:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MzkzYTJhN2JiZTNmZTMwN2MxYzlkMWUzNWJmYWJmMmZPRL1p: --dhchap-ctrl-secret DHHC-1:02:YmU0OWZmMmYyNjA2ZWVkNDk5MTEzYWE4N2VjZDc1NWQ5NjA4MWI4ZDQ1MDIwNGNh643I1w==: 00:21:01.076 08:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.076 08:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:01.076 08:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.076 08:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.076 08:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.076 08:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.076 08:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.076 08:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.338 08:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:01.338 08:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.338 08:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:01.338 08:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:01.338 08:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:01.338 08:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.338 08:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.338 08:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.338 08:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.338 08:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.338 08:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.338 08:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.598 00:21:01.598 08:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.598 08:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.598 08:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.598 08:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.859 08:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.859 08:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.859 08:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.859 08:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.859 08:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.859 { 00:21:01.859 "cntlid": 125, 00:21:01.859 "qid": 0, 00:21:01.859 "state": "enabled", 00:21:01.859 "listen_address": { 00:21:01.859 "trtype": "TCP", 00:21:01.859 "adrfam": "IPv4", 00:21:01.859 "traddr": "10.0.0.2", 00:21:01.859 "trsvcid": "4420" 00:21:01.859 }, 00:21:01.859 "peer_address": { 00:21:01.859 "trtype": "TCP", 00:21:01.859 "adrfam": "IPv4", 00:21:01.859 "traddr": "10.0.0.1", 00:21:01.859 "trsvcid": "57540" 00:21:01.859 }, 00:21:01.859 "auth": { 00:21:01.859 "state": "completed", 00:21:01.859 "digest": "sha512", 00:21:01.859 "dhgroup": "ffdhe4096" 00:21:01.859 } 00:21:01.859 } 00:21:01.859 ]' 00:21:01.859 08:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.859 08:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.859 08:59:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.859 08:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:01.859 08:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.859 08:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.859 08:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.859 08:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.119 08:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YzUwYjU4YTQ1ZTNlNzNkM2IxMzRlNzMyMDU3NmQ0OTU3OTM2NDhjZTJhNmQyNTEygnshyQ==: --dhchap-ctrl-secret DHHC-1:01:MTk5ZjcxY2Y4YjlkY2ViMGRlMDJlZjM5M2M5ZjA2MGG+oj4m: 00:21:02.690 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.690 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:02.690 08:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:02.690 08:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.690 08:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:02.690 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.690 08:59:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:02.690 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:02.951 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:02.951 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.951 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:02.951 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:02.951 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:02.951 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.951 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:02.951 08:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:02.951 08:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.951 08:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:02.951 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:02.951 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:03.211 00:21:03.211 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.212 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.212 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.472 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.472 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.472 08:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.472 08:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.472 08:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.472 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.472 { 00:21:03.472 "cntlid": 127, 00:21:03.472 "qid": 0, 00:21:03.472 "state": "enabled", 00:21:03.472 "listen_address": { 00:21:03.472 "trtype": "TCP", 00:21:03.472 "adrfam": "IPv4", 00:21:03.472 "traddr": "10.0.0.2", 00:21:03.472 "trsvcid": "4420" 00:21:03.472 }, 00:21:03.472 "peer_address": { 00:21:03.472 "trtype": "TCP", 00:21:03.472 "adrfam": "IPv4", 00:21:03.472 "traddr": "10.0.0.1", 00:21:03.472 "trsvcid": "57570" 00:21:03.472 }, 00:21:03.472 "auth": { 00:21:03.472 "state": "completed", 00:21:03.472 "digest": "sha512", 00:21:03.472 "dhgroup": "ffdhe4096" 00:21:03.472 } 00:21:03.472 } 00:21:03.472 ]' 00:21:03.472 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.472 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.472 08:59:25 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.472 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:03.472 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.472 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.472 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.472 08:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.733 08:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:21:04.303 08:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.303 08:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:04.303 08:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.303 08:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.303 08:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.303 08:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.303 08:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.304 08:59:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:04.304 08:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:04.564 08:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:04.564 08:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.564 08:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.564 08:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:04.564 08:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:04.564 08:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.564 08:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.564 08:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.564 08:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.564 08:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.564 08:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.564 08:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.825 00:21:04.825 08:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.825 08:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.825 08:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.085 08:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.085 08:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.085 08:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.085 08:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.085 08:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.085 08:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.085 { 00:21:05.085 "cntlid": 129, 00:21:05.085 "qid": 0, 00:21:05.085 "state": "enabled", 00:21:05.085 "listen_address": { 00:21:05.085 "trtype": "TCP", 00:21:05.085 "adrfam": "IPv4", 00:21:05.085 "traddr": "10.0.0.2", 00:21:05.085 "trsvcid": "4420" 00:21:05.085 }, 00:21:05.085 "peer_address": { 00:21:05.085 "trtype": "TCP", 00:21:05.085 "adrfam": "IPv4", 00:21:05.085 "traddr": "10.0.0.1", 00:21:05.085 "trsvcid": "57586" 00:21:05.085 }, 00:21:05.085 "auth": { 00:21:05.085 "state": "completed", 00:21:05.085 "digest": "sha512", 00:21:05.085 "dhgroup": "ffdhe6144" 00:21:05.085 } 00:21:05.085 } 00:21:05.085 ]' 00:21:05.085 08:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.085 08:59:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.085 08:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.085 08:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:05.085 08:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.085 08:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.085 08:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.085 08:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.345 08:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:21:05.915 08:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.915 08:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:05.915 08:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.915 08:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.915 08:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.915 08:59:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.915 08:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:05.915 08:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:06.176 08:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:06.176 08:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.176 08:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.176 08:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:06.176 08:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:06.176 08:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.176 08:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.176 08:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.176 08:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.176 08:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.176 08:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.176 08:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.437 00:21:06.437 08:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.437 08:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.437 08:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.698 08:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.698 08:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.698 08:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.698 08:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.698 08:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.698 08:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.698 { 00:21:06.698 "cntlid": 131, 00:21:06.698 "qid": 0, 00:21:06.698 "state": "enabled", 00:21:06.698 "listen_address": { 00:21:06.698 "trtype": "TCP", 00:21:06.698 "adrfam": "IPv4", 00:21:06.698 "traddr": "10.0.0.2", 00:21:06.698 "trsvcid": "4420" 00:21:06.698 }, 00:21:06.698 "peer_address": { 00:21:06.698 "trtype": "TCP", 00:21:06.698 "adrfam": "IPv4", 00:21:06.698 "traddr": "10.0.0.1", 00:21:06.698 "trsvcid": "57602" 00:21:06.698 }, 00:21:06.698 "auth": { 00:21:06.698 "state": "completed", 00:21:06.698 "digest": "sha512", 00:21:06.698 "dhgroup": "ffdhe6144" 00:21:06.698 } 00:21:06.698 } 00:21:06.699 ]' 00:21:06.699 08:59:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.699 08:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.699 08:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.699 08:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:06.699 08:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.959 08:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.959 08:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.959 08:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.959 08:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MzkzYTJhN2JiZTNmZTMwN2MxYzlkMWUzNWJmYWJmMmZPRL1p: --dhchap-ctrl-secret DHHC-1:02:YmU0OWZmMmYyNjA2ZWVkNDk5MTEzYWE4N2VjZDc1NWQ5NjA4MWI4ZDQ1MDIwNGNh643I1w==: 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.902 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.902 08:59:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.474 00:21:08.474 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.474 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.474 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.474 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.474 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.474 08:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:08.474 08:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.474 08:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:08.474 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.474 { 00:21:08.474 "cntlid": 133, 00:21:08.474 "qid": 0, 00:21:08.474 "state": "enabled", 00:21:08.474 "listen_address": { 00:21:08.474 "trtype": "TCP", 00:21:08.474 "adrfam": "IPv4", 00:21:08.474 "traddr": "10.0.0.2", 00:21:08.474 "trsvcid": "4420" 00:21:08.474 }, 00:21:08.474 "peer_address": { 00:21:08.474 "trtype": "TCP", 00:21:08.474 "adrfam": "IPv4", 00:21:08.474 "traddr": "10.0.0.1", 00:21:08.474 "trsvcid": "57628" 00:21:08.474 }, 00:21:08.474 "auth": { 00:21:08.474 "state": "completed", 00:21:08.474 "digest": "sha512", 00:21:08.474 "dhgroup": "ffdhe6144" 00:21:08.474 } 00:21:08.474 } 00:21:08.474 ]' 
00:21:08.474 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.474 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.474 08:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.474 08:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:08.474 08:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.735 08:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.735 08:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.736 08:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.736 08:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YzUwYjU4YTQ1ZTNlNzNkM2IxMzRlNzMyMDU3NmQ0OTU3OTM2NDhjZTJhNmQyNTEygnshyQ==: --dhchap-ctrl-secret DHHC-1:01:MTk5ZjcxY2Y4YjlkY2ViMGRlMDJlZjM5M2M5ZjA2MGG+oj4m: 00:21:09.678 08:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.678 08:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:09.678 08:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.678 08:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.678 08:59:31 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.678 08:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.678 08:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.678 08:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.678 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:09.678 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.678 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:09.678 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:09.678 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:09.678 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.678 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:09.678 08:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.678 08:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.678 08:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.678 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.678 08:59:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.940 00:21:09.940 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.940 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.940 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.201 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.202 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.202 08:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.202 08:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.202 08:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.202 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.202 { 00:21:10.202 "cntlid": 135, 00:21:10.202 "qid": 0, 00:21:10.202 "state": "enabled", 00:21:10.202 "listen_address": { 00:21:10.202 "trtype": "TCP", 00:21:10.202 "adrfam": "IPv4", 00:21:10.202 "traddr": "10.0.0.2", 00:21:10.202 "trsvcid": "4420" 00:21:10.202 }, 00:21:10.202 "peer_address": { 00:21:10.202 "trtype": "TCP", 00:21:10.202 "adrfam": "IPv4", 00:21:10.202 "traddr": "10.0.0.1", 00:21:10.202 "trsvcid": "57656" 00:21:10.202 }, 00:21:10.202 "auth": { 00:21:10.202 "state": "completed", 00:21:10.202 "digest": "sha512", 00:21:10.202 "dhgroup": "ffdhe6144" 00:21:10.202 } 00:21:10.202 } 00:21:10.202 ]' 00:21:10.202 08:59:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.202 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.202 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.202 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:10.202 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.462 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.462 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.462 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.462 08:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:21:11.405 08:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.405 08:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:11.405 08:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.405 08:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.405 08:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.405 
08:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.405 08:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.405 08:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:11.405 08:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:11.405 08:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:11.405 08:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.405 08:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.405 08:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:11.405 08:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:11.405 08:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.406 08:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.406 08:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.406 08:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.406 08:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.406 08:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.406 08:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.979 00:21:11.979 08:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.979 08:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.979 08:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.240 08:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.240 08:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.240 08:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.240 08:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.240 08:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.240 08:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.240 { 00:21:12.240 "cntlid": 137, 00:21:12.240 "qid": 0, 00:21:12.240 "state": "enabled", 00:21:12.240 "listen_address": { 00:21:12.240 "trtype": "TCP", 00:21:12.240 "adrfam": "IPv4", 00:21:12.240 "traddr": "10.0.0.2", 00:21:12.240 "trsvcid": "4420" 00:21:12.240 }, 00:21:12.240 "peer_address": { 00:21:12.240 "trtype": "TCP", 00:21:12.240 "adrfam": "IPv4", 00:21:12.240 "traddr": "10.0.0.1", 00:21:12.240 "trsvcid": "37214" 00:21:12.240 }, 00:21:12.240 "auth": { 00:21:12.240 "state": "completed", 00:21:12.240 "digest": "sha512", 00:21:12.240 "dhgroup": 
"ffdhe8192" 00:21:12.240 } 00:21:12.240 } 00:21:12.240 ]' 00:21:12.240 08:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.240 08:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.240 08:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.240 08:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:12.240 08:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.240 08:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.240 08:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.240 08:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.501 08:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:21:13.073 08:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.073 08:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:13.073 08:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:13.073 08:59:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.073 08:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:13.073 08:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.073 08:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.073 08:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.335 08:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:13.335 08:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.335 08:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.335 08:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:13.335 08:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:13.335 08:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.335 08:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.335 08:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:13.335 08:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.335 08:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:13.335 08:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.335 08:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.906 00:21:13.906 08:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.906 08:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.906 08:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.906 08:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.906 08:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.906 08:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:13.906 08:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.906 08:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:13.906 08:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.906 { 00:21:13.906 "cntlid": 139, 00:21:13.906 "qid": 0, 00:21:13.906 "state": "enabled", 00:21:13.906 "listen_address": { 00:21:13.906 "trtype": "TCP", 00:21:13.906 "adrfam": "IPv4", 00:21:13.906 "traddr": "10.0.0.2", 00:21:13.906 "trsvcid": "4420" 00:21:13.906 }, 00:21:13.906 "peer_address": { 00:21:13.906 "trtype": "TCP", 00:21:13.906 "adrfam": "IPv4", 00:21:13.906 "traddr": "10.0.0.1", 00:21:13.906 "trsvcid": "37258" 00:21:13.906 }, 00:21:13.906 
"auth": { 00:21:13.906 "state": "completed", 00:21:13.906 "digest": "sha512", 00:21:13.906 "dhgroup": "ffdhe8192" 00:21:13.906 } 00:21:13.906 } 00:21:13.906 ]' 00:21:13.907 08:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.168 08:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.168 08:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.168 08:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:14.168 08:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.168 08:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.168 08:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.168 08:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.428 08:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MzkzYTJhN2JiZTNmZTMwN2MxYzlkMWUzNWJmYWJmMmZPRL1p: --dhchap-ctrl-secret DHHC-1:02:YmU0OWZmMmYyNjA2ZWVkNDk5MTEzYWE4N2VjZDc1NWQ5NjA4MWI4ZDQ1MDIwNGNh643I1w==: 00:21:15.004 08:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.004 08:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:15.004 08:59:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:21:15.004 08:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.004 08:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:15.004 08:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.004 08:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:15.004 08:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:15.303 08:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:15.303 08:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.303 08:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.303 08:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:15.303 08:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:15.303 08:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.303 08:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.303 08:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:15.303 08:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.303 08:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:15.303 08:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.303 08:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.875 00:21:15.875 08:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.875 08:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.875 08:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.875 08:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.875 08:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.875 08:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:15.875 08:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.875 08:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:15.875 08:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.875 { 00:21:15.875 "cntlid": 141, 00:21:15.875 "qid": 0, 00:21:15.875 "state": "enabled", 00:21:15.875 "listen_address": { 00:21:15.875 "trtype": "TCP", 00:21:15.875 "adrfam": "IPv4", 00:21:15.875 "traddr": "10.0.0.2", 00:21:15.875 "trsvcid": "4420" 00:21:15.875 }, 00:21:15.875 "peer_address": { 00:21:15.875 "trtype": "TCP", 00:21:15.875 "adrfam": "IPv4", 00:21:15.875 "traddr": "10.0.0.1", 00:21:15.875 "trsvcid": 
"37296" 00:21:15.875 }, 00:21:15.875 "auth": { 00:21:15.875 "state": "completed", 00:21:15.875 "digest": "sha512", 00:21:15.875 "dhgroup": "ffdhe8192" 00:21:15.875 } 00:21:15.875 } 00:21:15.875 ]' 00:21:15.875 08:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.136 08:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.136 08:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.136 08:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:16.136 08:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.136 08:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.136 08:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.136 08:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.397 08:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YzUwYjU4YTQ1ZTNlNzNkM2IxMzRlNzMyMDU3NmQ0OTU3OTM2NDhjZTJhNmQyNTEygnshyQ==: --dhchap-ctrl-secret DHHC-1:01:MTk5ZjcxY2Y4YjlkY2ViMGRlMDJlZjM5M2M5ZjA2MGG+oj4m: 00:21:16.969 08:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.969 08:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:16.969 08:59:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:16.969 08:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.969 08:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:16.969 08:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:16.969 08:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:16.969 08:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:17.229 08:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:21:17.229 08:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.229 08:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.229 08:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:17.229 08:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:17.229 08:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.229 08:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:17.229 08:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.229 08:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.229 08:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.229 08:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.229 08:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.800 00:21:17.800 08:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.800 08:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.800 08:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.800 08:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.800 08:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.800 08:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.800 08:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.800 08:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.800 08:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.800 { 00:21:17.800 "cntlid": 143, 00:21:17.800 "qid": 0, 00:21:17.800 "state": "enabled", 00:21:17.800 "listen_address": { 00:21:17.800 "trtype": "TCP", 00:21:17.800 "adrfam": "IPv4", 00:21:17.800 "traddr": "10.0.0.2", 00:21:17.800 "trsvcid": "4420" 00:21:17.800 }, 00:21:17.800 "peer_address": { 00:21:17.800 "trtype": "TCP", 00:21:17.800 "adrfam": "IPv4", 00:21:17.800 "traddr": "10.0.0.1", 00:21:17.800 "trsvcid": "37330" 00:21:17.800 }, 00:21:17.800 "auth": { 
00:21:17.800 "state": "completed", 00:21:17.800 "digest": "sha512", 00:21:17.800 "dhgroup": "ffdhe8192" 00:21:17.800 } 00:21:17.800 } 00:21:17.800 ]' 00:21:17.800 08:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.061 08:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.061 08:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.061 08:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:18.061 08:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.061 08:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.061 08:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.061 08:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.322 08:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:21:18.895 08:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.895 08:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:18.895 08:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.895 08:59:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.895 08:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.895 08:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:18.895 08:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:18.895 08:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:18.895 08:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:18.895 08:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:18.895 08:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:19.156 08:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:19.156 08:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.156 08:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:19.156 08:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:19.156 08:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:19.156 08:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.156 08:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.156 08:59:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:21:19.156 08:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.156 08:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:19.156 08:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.156 08:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.728 00:21:19.728 08:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.728 08:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.728 08:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.728 08:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.728 08:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.728 08:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:19.728 08:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.728 08:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:19.728 08:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.728 { 00:21:19.728 "cntlid": 145, 00:21:19.728 "qid": 0, 
00:21:19.728 "state": "enabled", 00:21:19.728 "listen_address": { 00:21:19.728 "trtype": "TCP", 00:21:19.728 "adrfam": "IPv4", 00:21:19.728 "traddr": "10.0.0.2", 00:21:19.728 "trsvcid": "4420" 00:21:19.728 }, 00:21:19.728 "peer_address": { 00:21:19.728 "trtype": "TCP", 00:21:19.728 "adrfam": "IPv4", 00:21:19.728 "traddr": "10.0.0.1", 00:21:19.728 "trsvcid": "37350" 00:21:19.728 }, 00:21:19.728 "auth": { 00:21:19.728 "state": "completed", 00:21:19.728 "digest": "sha512", 00:21:19.728 "dhgroup": "ffdhe8192" 00:21:19.728 } 00:21:19.728 } 00:21:19.728 ]' 00:21:19.728 08:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.988 08:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.988 08:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.988 08:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:19.988 08:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.988 08:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.988 08:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.988 08:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.988 08:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGM5NjllNGUxMmI4NmM1YjVkY2UxOGFjZDM2ZGViNDkzODRhZmEyNDBlZWU5MjIwYXV8cQ==: --dhchap-ctrl-secret DHHC-1:03:YjQxYTU5YjI4ZGZmMzc1NGMzMTdhYTM1NTEzYzM0MmJhMGUwOTg5NGU0NjIwOTFlOWU2ODg5OGJjNjZmN2JjZhFmy38=: 00:21:20.930 08:59:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.930 08:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:20.930 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.930 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.930 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.930 08:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:20.930 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.930 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.930 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.930 08:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:20.930 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:20.931 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:20.931 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:20.931 08:59:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:20.931 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:20.931 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:20.931 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:20.931 08:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:21.502 request: 00:21:21.502 { 00:21:21.502 "name": "nvme0", 00:21:21.502 "trtype": "tcp", 00:21:21.502 "traddr": "10.0.0.2", 00:21:21.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:21.502 "adrfam": "ipv4", 00:21:21.502 "trsvcid": "4420", 00:21:21.502 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:21.502 "dhchap_key": "key2", 00:21:21.502 "method": "bdev_nvme_attach_controller", 00:21:21.502 "req_id": 1 00:21:21.502 } 00:21:21.502 Got JSON-RPC error response 00:21:21.502 response: 00:21:21.502 { 00:21:21.502 "code": -5, 00:21:21.502 "message": "Input/output error" 00:21:21.502 } 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:21.502 08:59:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:21.502 08:59:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:21.502 08:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:21.763 request: 00:21:21.763 { 00:21:21.763 "name": "nvme0", 00:21:21.763 "trtype": "tcp", 00:21:21.763 "traddr": "10.0.0.2", 00:21:21.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:21.763 "adrfam": "ipv4", 00:21:21.763 "trsvcid": "4420", 00:21:21.763 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:21.763 "dhchap_key": "key1", 00:21:21.763 "dhchap_ctrlr_key": "ckey2", 00:21:21.763 "method": "bdev_nvme_attach_controller", 00:21:21.763 "req_id": 1 00:21:21.763 } 00:21:21.763 Got JSON-RPC error response 00:21:21.763 response: 00:21:21.763 { 00:21:21.763 "code": -5, 00:21:21.763 "message": "Input/output error" 00:21:21.763 } 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:22.024 08:59:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@641 -- # type -t hostrpc 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:22.024 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.025 08:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.286 request: 00:21:22.286 { 00:21:22.286 "name": "nvme0", 00:21:22.286 "trtype": "tcp", 00:21:22.286 "traddr": "10.0.0.2", 00:21:22.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:22.286 "adrfam": "ipv4", 00:21:22.286 "trsvcid": "4420", 00:21:22.286 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:22.286 "dhchap_key": "key1", 00:21:22.286 "dhchap_ctrlr_key": "ckey1", 00:21:22.286 "method": "bdev_nvme_attach_controller", 00:21:22.286 "req_id": 1 00:21:22.286 } 00:21:22.286 Got JSON-RPC error response 00:21:22.286 response: 00:21:22.286 { 00:21:22.286 "code": -5, 00:21:22.286 "message": "Input/output error" 00:21:22.286 } 00:21:22.286 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:22.286 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:22.286 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:22.286 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:22.286 08:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:22.286 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:22.286 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.547 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:22.547 08:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2590939 00:21:22.547 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 2590939 ']' 00:21:22.547 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 2590939 00:21:22.547 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:21:22.547 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:22.547 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2590939 00:21:22.547 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:22.547 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:22.547 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2590939' 00:21:22.547 killing process with pid 2590939 00:21:22.547 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 2590939 00:21:22.547 08:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 2590939 00:21:22.547 08:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:22.547 08:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:22.547 08:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:22.547 08:59:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.547 08:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2617300 00:21:22.547 08:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2617300 00:21:22.547 08:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:22.547 08:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2617300 ']' 00:21:22.547 08:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.547 08:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:22.547 08:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.547 08:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:22.548 08:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.490 08:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:23.490 08:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:21:23.490 08:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.490 08:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:23.490 08:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.490 08:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.490 08:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:23.490 08:59:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@142 -- # waitforlisten 2617300 00:21:23.490 08:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2617300 ']' 00:21:23.490 08:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.490 08:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:23.490 08:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.490 08:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:23.490 08:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.751 08:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:23.751 08:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:21:23.751 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:23.751 08:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:23.751 08:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.751 08:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:23.751 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:23.751 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.751 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.751 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:23.751 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:23.751 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.751 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:23.751 08:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:23.751 08:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.751 08:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:23.751 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.751 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:24.324 00:21:24.324 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.324 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.324 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.324 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.324 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.324 08:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.324 08:59:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:24.324 08:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.324 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.324 { 00:21:24.324 "cntlid": 1, 00:21:24.324 "qid": 0, 00:21:24.324 "state": "enabled", 00:21:24.324 "listen_address": { 00:21:24.324 "trtype": "TCP", 00:21:24.324 "adrfam": "IPv4", 00:21:24.324 "traddr": "10.0.0.2", 00:21:24.324 "trsvcid": "4420" 00:21:24.324 }, 00:21:24.324 "peer_address": { 00:21:24.324 "trtype": "TCP", 00:21:24.324 "adrfam": "IPv4", 00:21:24.324 "traddr": "10.0.0.1", 00:21:24.324 "trsvcid": "42994" 00:21:24.324 }, 00:21:24.324 "auth": { 00:21:24.324 "state": "completed", 00:21:24.324 "digest": "sha512", 00:21:24.324 "dhgroup": "ffdhe8192" 00:21:24.324 } 00:21:24.324 } 00:21:24.324 ]' 00:21:24.585 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.585 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.585 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.585 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:24.585 08:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.585 08:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.585 08:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.585 08:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.846 08:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZWY0Mzg0YmUxNzQ0MTEzMGQ2NDI4MGQ4M2Y5ZDY5M2Y2N2NkZDE1ZGQwZDNiNmE1OGE1Yzc3ZmM1NjgwMjVmZEfqR+c=: 00:21:25.417 08:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.417 08:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:25.417 08:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:25.417 08:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.417 08:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:25.417 08:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:25.417 08:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:25.417 08:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.417 08:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:25.417 08:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:25.417 08:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:25.678 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:25.678 
08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:25.678 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:25.678 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:25.678 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:25.678 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:25.678 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:25.678 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:25.678 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:25.939 request: 00:21:25.939 { 00:21:25.939 "name": "nvme0", 00:21:25.939 "trtype": "tcp", 00:21:25.939 "traddr": "10.0.0.2", 00:21:25.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:25.939 "adrfam": "ipv4", 00:21:25.939 "trsvcid": "4420", 00:21:25.939 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:25.939 "dhchap_key": "key3", 00:21:25.939 "method": "bdev_nvme_attach_controller", 00:21:25.939 "req_id": 1 00:21:25.939 } 00:21:25.939 Got JSON-RPC error response 00:21:25.939 response: 
00:21:25.939 { 00:21:25.939 "code": -5, 00:21:25.939 "message": "Input/output error" 00:21:25.939 } 00:21:25.939 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:25.939 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:25.939 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:25.939 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:25.939 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:25.939 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:25.939 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:25.939 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:25.939 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:25.939 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:25.939 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:25.940 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:25.940 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
00:21:25.940 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:25.940 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:25.940 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:25.940 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:26.200 request: 00:21:26.200 { 00:21:26.200 "name": "nvme0", 00:21:26.200 "trtype": "tcp", 00:21:26.200 "traddr": "10.0.0.2", 00:21:26.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:26.200 "adrfam": "ipv4", 00:21:26.200 "trsvcid": "4420", 00:21:26.200 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:26.200 "dhchap_key": "key3", 00:21:26.200 "method": "bdev_nvme_attach_controller", 00:21:26.200 "req_id": 1 00:21:26.200 } 00:21:26.200 Got JSON-RPC error response 00:21:26.200 response: 00:21:26.200 { 00:21:26.200 "code": -5, 00:21:26.200 "message": "Input/output error" 00:21:26.200 } 00:21:26.200 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:26.200 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:26.200 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:26.200 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:26.200 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:26.200 08:59:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:26.200 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:26.200 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:26.200 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:26.200 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:26.200 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:26.200 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.200 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.200 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.200 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:26.200 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.200 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.462 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.462 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:26.462 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:26.462 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:26.462 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:26.462 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:26.462 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:26.462 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:26.462 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:26.462 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:26.462 request: 00:21:26.462 { 00:21:26.462 "name": "nvme0", 00:21:26.462 "trtype": "tcp", 00:21:26.462 "traddr": "10.0.0.2", 00:21:26.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:26.462 "adrfam": "ipv4", 00:21:26.462 "trsvcid": "4420", 00:21:26.462 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:21:26.462 "dhchap_key": "key0", 00:21:26.462 "dhchap_ctrlr_key": "key1", 00:21:26.462 "method": "bdev_nvme_attach_controller", 00:21:26.462 "req_id": 1 00:21:26.462 } 00:21:26.462 Got JSON-RPC error response 00:21:26.462 response: 00:21:26.462 { 00:21:26.462 "code": -5, 00:21:26.462 "message": "Input/output error" 00:21:26.462 } 00:21:26.462 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:26.462 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:26.462 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:26.462 08:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:26.462 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:26.462 08:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:26.723 00:21:26.723 08:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:26.723 08:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:26.723 08:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.983 08:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.983 08:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:21:26.983 08:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.983 08:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:26.983 08:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:26.983 08:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2591148 00:21:26.983 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 2591148 ']' 00:21:26.983 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 2591148 00:21:26.983 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:21:26.983 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:26.983 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2591148 00:21:26.983 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:26.983 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:26.983 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2591148' 00:21:26.983 killing process with pid 2591148 00:21:26.983 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 2591148 00:21:26.983 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 2591148 00:21:27.244 08:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:27.244 08:59:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:27.244 08:59:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:27.244 08:59:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:27.244 08:59:49 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@120 -- # set +e 00:21:27.244 08:59:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:27.244 08:59:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:27.244 rmmod nvme_tcp 00:21:27.244 rmmod nvme_fabrics 00:21:27.244 rmmod nvme_keyring 00:21:27.505 08:59:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:27.505 08:59:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:27.505 08:59:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:27.505 08:59:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2617300 ']' 00:21:27.505 08:59:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2617300 00:21:27.505 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 2617300 ']' 00:21:27.505 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 2617300 00:21:27.505 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:21:27.505 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:27.505 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2617300 00:21:27.505 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:27.505 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:27.505 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2617300' 00:21:27.505 killing process with pid 2617300 00:21:27.505 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 2617300 00:21:27.505 08:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 2617300 00:21:27.505 08:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:27.505 
08:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:27.505 08:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:27.505 08:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:27.505 08:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:27.505 08:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.505 08:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.505 08:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.051 08:59:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:30.051 08:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.xaA /tmp/spdk.key-sha256.j6Z /tmp/spdk.key-sha384.Qao /tmp/spdk.key-sha512.OgQ /tmp/spdk.key-sha512.8EN /tmp/spdk.key-sha384.kMJ /tmp/spdk.key-sha256.ErU '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:30.051 00:21:30.051 real 2m23.737s 00:21:30.051 user 5m19.623s 00:21:30.051 sys 0m21.214s 00:21:30.051 08:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:30.051 08:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.051 ************************************ 00:21:30.051 END TEST nvmf_auth_target 00:21:30.051 ************************************ 00:21:30.051 08:59:52 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:30.051 08:59:52 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:30.051 08:59:52 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 
1 ']' 00:21:30.051 08:59:52 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:30.051 08:59:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:30.051 ************************************ 00:21:30.051 START TEST nvmf_bdevio_no_huge 00:21:30.051 ************************************ 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:30.051 * Looking for test storage... 00:21:30.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:30.051 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:30.052 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.052 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.052 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.052 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:30.052 08:59:52 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:30.052 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:30.052 08:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:36.705 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:36.705 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:36.705 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:36.705 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:36.705 08:59:58 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:36.705 08:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:36.705 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:36.705 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:36.705 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:36.705 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:36.705 
08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:36.705 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:36.705 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:36.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:21:36.705 00:21:36.705 --- 10.0.0.2 ping statistics --- 00:21:36.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.705 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:21:36.705 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:36.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:36.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:21:36.705 00:21:36.706 --- 10.0.0.1 ping statistics --- 00:21:36.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.706 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:36.706 08:59:59 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2622350 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2622350 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # '[' -z 2622350 ']' 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:36.706 08:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:36.967 [2024-06-09 08:59:59.271780] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:21:36.967 [2024-06-09 08:59:59.271846] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:36.967 [2024-06-09 08:59:59.363231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:36.967 [2024-06-09 08:59:59.470517] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.967 [2024-06-09 08:59:59.470569] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.967 [2024-06-09 08:59:59.470577] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.967 [2024-06-09 08:59:59.470584] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.967 [2024-06-09 08:59:59.470590] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:36.967 [2024-06-09 08:59:59.470752] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:21:36.967 [2024-06-09 08:59:59.471021] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:21:36.967 [2024-06-09 08:59:59.471182] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:21:36.967 [2024-06-09 08:59:59.471184] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:21:37.540 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:37.540 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # return 0 00:21:37.540 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:37.540 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:37.540 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:37.802 [2024-06-09 09:00:00.129236] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:37.802 Malloc0 00:21:37.802 09:00:00 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:37.802 [2024-06-09 09:00:00.166972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:37.802 09:00:00 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:37.802 { 00:21:37.802 "params": { 00:21:37.802 "name": "Nvme$subsystem", 00:21:37.802 "trtype": "$TEST_TRANSPORT", 00:21:37.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.802 "adrfam": "ipv4", 00:21:37.802 "trsvcid": "$NVMF_PORT", 00:21:37.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.802 "hdgst": ${hdgst:-false}, 00:21:37.802 "ddgst": ${ddgst:-false} 00:21:37.802 }, 00:21:37.802 "method": "bdev_nvme_attach_controller" 00:21:37.802 } 00:21:37.802 EOF 00:21:37.802 )") 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:37.802 09:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:37.802 "params": { 00:21:37.802 "name": "Nvme1", 00:21:37.802 "trtype": "tcp", 00:21:37.802 "traddr": "10.0.0.2", 00:21:37.802 "adrfam": "ipv4", 00:21:37.802 "trsvcid": "4420", 00:21:37.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:37.802 "hdgst": false, 00:21:37.802 "ddgst": false 00:21:37.802 }, 00:21:37.802 "method": "bdev_nvme_attach_controller" 00:21:37.802 }' 00:21:37.802 [2024-06-09 09:00:00.220652] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:21:37.802 [2024-06-09 09:00:00.220727] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2622421 ] 00:21:37.802 [2024-06-09 09:00:00.291181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:38.064 [2024-06-09 09:00:00.388844] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.064 [2024-06-09 09:00:00.388963] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.064 [2024-06-09 09:00:00.388967] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.325 I/O targets: 00:21:38.325 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:38.325 00:21:38.325 00:21:38.325 CUnit - A unit testing framework for C - Version 2.1-3 00:21:38.325 http://cunit.sourceforge.net/ 00:21:38.325 00:21:38.325 00:21:38.325 Suite: bdevio tests on: Nvme1n1 00:21:38.325 Test: blockdev write read block ...passed 00:21:38.325 Test: blockdev write zeroes read block ...passed 00:21:38.325 Test: blockdev write zeroes read no split ...passed 00:21:38.587 Test: blockdev write zeroes read split ...passed 00:21:38.587 Test: blockdev write zeroes read split partial ...passed 00:21:38.587 Test: blockdev reset ...[2024-06-09 09:00:00.978785] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:38.587 [2024-06-09 09:00:00.978846] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1980650 (9): Bad file descriptor 00:21:38.587 [2024-06-09 09:00:01.082495] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:38.587 passed 00:21:38.587 Test: blockdev write read 8 blocks ...passed 00:21:38.587 Test: blockdev write read size > 128k ...passed 00:21:38.587 Test: blockdev write read invalid size ...passed 00:21:38.587 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:38.587 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:38.587 Test: blockdev write read max offset ...passed 00:21:38.848 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:38.848 Test: blockdev writev readv 8 blocks ...passed 00:21:38.848 Test: blockdev writev readv 30 x 1block ...passed 00:21:38.848 Test: blockdev writev readv block ...passed 00:21:38.848 Test: blockdev writev readv size > 128k ...passed 00:21:38.848 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:38.848 Test: blockdev comparev and writev ...[2024-06-09 09:00:01.306673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:38.848 [2024-06-09 09:00:01.306697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.848 [2024-06-09 09:00:01.306708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:38.848 [2024-06-09 09:00:01.306714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.848 [2024-06-09 09:00:01.307146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:38.848 [2024-06-09 09:00:01.307153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:38.848 [2024-06-09 09:00:01.307163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:38.848 [2024-06-09 09:00:01.307171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:38.848 [2024-06-09 09:00:01.307616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:38.848 [2024-06-09 09:00:01.307624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:38.848 [2024-06-09 09:00:01.307633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:38.848 [2024-06-09 09:00:01.307639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:38.848 [2024-06-09 09:00:01.308081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:38.849 [2024-06-09 09:00:01.308089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:38.849 [2024-06-09 09:00:01.308098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:38.849 [2024-06-09 09:00:01.308104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:38.849 passed 00:21:38.849 Test: blockdev nvme passthru rw ...passed 00:21:38.849 Test: blockdev nvme passthru vendor specific ...[2024-06-09 09:00:01.393152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:38.849 [2024-06-09 09:00:01.393162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:38.849 [2024-06-09 09:00:01.393356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:38.849 [2024-06-09 09:00:01.393362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:38.849 [2024-06-09 09:00:01.393581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:38.849 [2024-06-09 09:00:01.393589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:38.849 [2024-06-09 09:00:01.393815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:38.849 [2024-06-09 09:00:01.393823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:38.849 passed 00:21:39.110 Test: blockdev nvme admin passthru ...passed 00:21:39.110 Test: blockdev copy ...passed 00:21:39.110 00:21:39.110 Run Summary: Type Total Ran Passed Failed Inactive 00:21:39.110 suites 1 1 n/a 0 0 00:21:39.110 tests 23 23 23 0 0 00:21:39.110 asserts 152 152 152 0 n/a 00:21:39.110 00:21:39.110 Elapsed time = 1.440 seconds 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:39.372 rmmod nvme_tcp 00:21:39.372 rmmod nvme_fabrics 00:21:39.372 rmmod nvme_keyring 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2622350 ']' 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2622350 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@949 -- # '[' -z 2622350 ']' 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # kill -0 2622350 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # uname 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2622350 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 2622350' 00:21:39.372 killing process with pid 2622350 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # kill 2622350 00:21:39.372 09:00:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # wait 2622350 00:21:39.632 09:00:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:39.632 09:00:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:39.632 09:00:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:39.632 09:00:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:39.632 09:00:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:39.632 09:00:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.632 09:00:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.632 09:00:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.179 09:00:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:42.179 00:21:42.179 real 0m12.012s 00:21:42.179 user 0m15.208s 00:21:42.179 sys 0m6.070s 00:21:42.179 09:00:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:42.179 09:00:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:42.179 ************************************ 00:21:42.179 END TEST nvmf_bdevio_no_huge 00:21:42.179 ************************************ 00:21:42.179 09:00:04 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:42.179 09:00:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:42.179 09:00:04 nvmf_tcp -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:21:42.179 09:00:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:42.179 ************************************ 00:21:42.179 START TEST nvmf_tls 00:21:42.180 ************************************ 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:42.180 * Looking for test storage... 00:21:42.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:42.180 
09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:42.180 09:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:48.771 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:48.771 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:48.771 09:00:11 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:48.771 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:48.771 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.771 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.032 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.032 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.032 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:49.032 09:00:11 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.032 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.032 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:49.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:21:49.293 00:21:49.293 --- 10.0.0.2 ping statistics --- 00:21:49.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.293 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:21:49.293 00:21:49.293 --- 10.0.0.1 ping statistics --- 00:21:49.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.293 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # 
modprobe nvme-tcp 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2627595 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2627595 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2627595 ']' 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:49.293 09:00:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.293 [2024-06-09 09:00:11.710693] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:21:49.293 [2024-06-09 09:00:11.710754] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.293 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.293 [2024-06-09 09:00:11.796735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.554 [2024-06-09 09:00:11.890094] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.554 [2024-06-09 09:00:11.890152] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.554 [2024-06-09 09:00:11.890160] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.554 [2024-06-09 09:00:11.890167] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.554 [2024-06-09 09:00:11.890173] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:49.554 [2024-06-09 09:00:11.890205] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.126 09:00:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:50.126 09:00:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:50.126 09:00:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:50.126 09:00:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:50.126 09:00:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.126 09:00:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.126 09:00:12 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:50.126 09:00:12 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:50.388 true 00:21:50.388 09:00:12 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:50.388 09:00:12 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:50.388 09:00:12 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:50.388 09:00:12 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:50.388 09:00:12 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:50.651 09:00:13 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:50.651 09:00:13 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:50.916 09:00:13 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:50.916 09:00:13 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:50.916 09:00:13 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:50.916 09:00:13 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:50.916 09:00:13 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:51.177 09:00:13 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:51.177 09:00:13 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:51.177 09:00:13 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:51.177 09:00:13 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:51.177 09:00:13 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:51.177 09:00:13 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:51.177 09:00:13 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:51.438 09:00:13 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:51.438 09:00:13 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:51.699 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:51.699 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:51.699 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:51.699 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:51.699 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:51.960 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.30lzoywImQ 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:51.961 
09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.7FS8Zw6pgO 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.30lzoywImQ 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.7FS8Zw6pgO 00:21:51.961 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:52.230 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:52.495 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.30lzoywImQ 00:21:52.495 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.30lzoywImQ 00:21:52.495 09:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:52.754 [2024-06-09 09:00:15.091638] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.754 09:00:15 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:52.754 09:00:15 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:53.015 [2024-06-09 09:00:15.384341] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:53.015 [2024-06-09 09:00:15.384523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:21:53.015 09:00:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:53.015 malloc0 00:21:53.015 09:00:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:53.275 09:00:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.30lzoywImQ 00:21:53.275 [2024-06-09 09:00:15.831384] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:53.535 09:00:15 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.30lzoywImQ 00:21:53.535 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.570 Initializing NVMe Controllers 00:22:03.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:03.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:03.570 Initialization complete. Launching workers. 
00:22:03.570 ======================================================== 00:22:03.570 Latency(us) 00:22:03.570 Device Information : IOPS MiB/s Average min max 00:22:03.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19154.16 74.82 3341.31 1104.12 5024.38 00:22:03.570 ======================================================== 00:22:03.570 Total : 19154.16 74.82 3341.31 1104.12 5024.38 00:22:03.570 00:22:03.570 09:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.30lzoywImQ 00:22:03.570 09:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:03.570 09:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:03.570 09:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:03.570 09:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.30lzoywImQ' 00:22:03.570 09:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:03.570 09:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2630340 00:22:03.570 09:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.570 09:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2630340 /var/tmp/bdevperf.sock 00:22:03.570 09:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:03.570 09:00:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2630340 ']' 00:22:03.570 09:00:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.570 09:00:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:03.570 09:00:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.570 09:00:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:03.570 09:00:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.570 [2024-06-09 09:00:25.997251] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:22:03.570 [2024-06-09 09:00:25.997307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630340 ] 00:22:03.570 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.570 [2024-06-09 09:00:26.047109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.570 [2024-06-09 09:00:26.099216] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.512 09:00:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:04.512 09:00:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:04.512 09:00:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.30lzoywImQ 00:22:04.512 [2024-06-09 09:00:26.899812] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:04.512 [2024-06-09 09:00:26.899873] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:04.512 TLSTESTn1 00:22:04.512 09:00:27 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:22:04.777 Running I/O for 10 seconds...
00:22:14.779
00:22:14.779 Latency(us)
00:22:14.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:14.779 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:14.779 Verification LBA range: start 0x0 length 0x2000
00:22:14.779 TLSTESTn1 : 10.08 1910.26 7.46 0.00 0.00 66773.99 6225.92 180005.55
00:22:14.780 ===================================================================================================================
00:22:14.780 Total : 1910.26 7.46 0.00 0.00 66773.99 6225.92 180005.55
00:22:14.780 0
00:22:14.780 09:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:14.780 09:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2630340
00:22:14.780 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2630340 ']'
00:22:14.780 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2630340
00:22:14.780 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname
00:22:14.780 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:22:14.780 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2630340
00:22:14.780 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:22:14.780 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:22:14.780 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2630340'
00:22:14.780 killing process with pid 2630340
00:22:14.780 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2630340
00:22:14.780 Received shutdown signal, test time was about 10.000000 seconds
00:22:14.780
00:22:14.780 Latency(us)
00:22:14.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:14.780 ===================================================================================================================
00:22:14.780 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:14.780 [2024-06-09 09:00:37.262982] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:22:14.780 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2630340
00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7FS8Zw6pgO
00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0
00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7FS8Zw6pgO
00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf
00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf
00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7FS8Zw6pgO
00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7FS8Zw6pgO'
00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- #
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2632562 00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2632562 /var/tmp/bdevperf.sock 00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2632562 ']' 00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:15.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:15.039 09:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.039 [2024-06-09 09:00:37.426900] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:22:15.039 [2024-06-09 09:00:37.426952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2632562 ] 00:22:15.039 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.039 [2024-06-09 09:00:37.476038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.039 [2024-06-09 09:00:37.527694] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.980 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:15.980 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:15.980 09:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7FS8Zw6pgO 00:22:15.980 [2024-06-09 09:00:38.312525] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:15.980 [2024-06-09 09:00:38.312583] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:15.980 [2024-06-09 09:00:38.320110] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:15.980 [2024-06-09 09:00:38.320636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe77960 (107): Transport endpoint is not connected 00:22:15.981 [2024-06-09 09:00:38.321631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe77960 (9): Bad file descriptor 00:22:15.981 [2024-06-09 09:00:38.322632] 
nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:15.981 [2024-06-09 09:00:38.322639] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:22:15.981 [2024-06-09 09:00:38.322645] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:15.981 request:
00:22:15.981 {
00:22:15.981 "name": "TLSTEST",
00:22:15.981 "trtype": "tcp",
00:22:15.981 "traddr": "10.0.0.2",
00:22:15.981 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:15.981 "adrfam": "ipv4",
00:22:15.981 "trsvcid": "4420",
00:22:15.981 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:15.981 "psk": "/tmp/tmp.7FS8Zw6pgO",
00:22:15.981 "method": "bdev_nvme_attach_controller",
00:22:15.981 "req_id": 1
00:22:15.981 }
00:22:15.981 Got JSON-RPC error response
00:22:15.981 response:
00:22:15.981 {
00:22:15.981 "code": -5,
00:22:15.981 "message": "Input/output error"
00:22:15.981 }
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2632562
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2632562 ']'
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2632562
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2632562
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2632562'
00:22:15.981 killing process with pid 2632562
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2632562
00:22:15.981 Received shutdown signal, test time was about 10.000000 seconds
00:22:15.981
00:22:15.981 Latency(us)
00:22:15.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:15.981 ===================================================================================================================
00:22:15.981 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:22:15.981 [2024-06-09 09:00:38.391968] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2632562
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.30lzoywImQ
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.30lzoywImQ
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls --
common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.30lzoywImQ 00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.30lzoywImQ' 00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2632696 00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2632696 /var/tmp/bdevperf.sock 00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2632696 ']' 00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:15.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:15.981 09:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.241 [2024-06-09 09:00:38.545007] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:22:16.241 [2024-06-09 09:00:38.545061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2632696 ] 00:22:16.241 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.241 [2024-06-09 09:00:38.595030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.241 [2024-06-09 09:00:38.646076] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.810 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:16.810 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:16.810 09:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.30lzoywImQ 00:22:17.081 [2024-06-09 09:00:39.458936] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:17.081 [2024-06-09 09:00:39.459004] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:17.081 [2024-06-09 09:00:39.463253] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:17.081 [2024-06-09 09:00:39.463270] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 
nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:17.081 [2024-06-09 09:00:39.463289] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:17.081 [2024-06-09 09:00:39.464072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1524960 (107): Transport endpoint is not connected 00:22:17.081 [2024-06-09 09:00:39.465067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1524960 (9): Bad file descriptor 00:22:17.081 [2024-06-09 09:00:39.466068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:17.081 [2024-06-09 09:00:39.466074] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:17.081 [2024-06-09 09:00:39.466080] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:17.081 request:
00:22:17.081 {
00:22:17.081 "name": "TLSTEST",
00:22:17.081 "trtype": "tcp",
00:22:17.081 "traddr": "10.0.0.2",
00:22:17.081 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:22:17.081 "adrfam": "ipv4",
00:22:17.081 "trsvcid": "4420",
00:22:17.081 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:17.081 "psk": "/tmp/tmp.30lzoywImQ",
00:22:17.081 "method": "bdev_nvme_attach_controller",
00:22:17.081 "req_id": 1
00:22:17.081 }
00:22:17.081 Got JSON-RPC error response
00:22:17.081 response:
00:22:17.081 {
00:22:17.081 "code": -5,
00:22:17.081 "message": "Input/output error"
00:22:17.081 }
00:22:17.081 09:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2632696
00:22:17.081 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2632696 ']'
00:22:17.081 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2632696
00:22:17.082 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname
00:22:17.082 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:22:17.082 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2632696
00:22:17.082 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:22:17.082 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:22:17.082 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2632696'
00:22:17.082 killing process with pid 2632696
00:22:17.082 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2632696
00:22:17.082 Received shutdown signal, test time was about 10.000000 seconds
00:22:17.082
00:22:17.082 Latency(us)
00:22:17.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:17.082 ===================================================================================================================
00:22:17.082 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:22:17.082 [2024-06-09 09:00:39.552008] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:22:17.082 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2632696
00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1
00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.30lzoywImQ
00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0
00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.30lzoywImQ
00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf
00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf
00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.30lzoywImQ
00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2
00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 --
# hostnqn=nqn.2016-06.io.spdk:host1 00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.30lzoywImQ' 00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2633030 00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2633030 /var/tmp/bdevperf.sock 00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2633030 ']' 00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:17.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:17.342 09:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.342 [2024-06-09 09:00:39.716784] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:22:17.342 [2024-06-09 09:00:39.716851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2633030 ] 00:22:17.342 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.342 [2024-06-09 09:00:39.766938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.342 [2024-06-09 09:00:39.817343] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.283 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:18.283 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:18.283 09:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.30lzoywImQ 00:22:18.283 [2024-06-09 09:00:40.622255] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:18.284 [2024-06-09 09:00:40.622320] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:18.284 [2024-06-09 09:00:40.626728] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:18.284 [2024-06-09 09:00:40.626746] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:18.284 [2024-06-09 09:00:40.626765] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:18.284 
[2024-06-09 09:00:40.627406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5d960 (107): Transport endpoint is not connected 00:22:18.284 [2024-06-09 09:00:40.628395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5d960 (9): Bad file descriptor 00:22:18.284 [2024-06-09 09:00:40.629397] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:18.284 [2024-06-09 09:00:40.629407] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:18.284 [2024-06-09 09:00:40.629414] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:18.284 request: 00:22:18.284 { 00:22:18.284 "name": "TLSTEST", 00:22:18.284 "trtype": "tcp", 00:22:18.284 "traddr": "10.0.0.2", 00:22:18.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:18.284 "adrfam": "ipv4", 00:22:18.284 "trsvcid": "4420", 00:22:18.284 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:18.284 "psk": "/tmp/tmp.30lzoywImQ", 00:22:18.284 "method": "bdev_nvme_attach_controller", 00:22:18.284 "req_id": 1 00:22:18.284 } 00:22:18.284 Got JSON-RPC error response 00:22:18.284 response: 00:22:18.284 { 00:22:18.284 "code": -5, 00:22:18.284 "message": "Input/output error" 00:22:18.284 } 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2633030 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2633030 ']' 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2633030 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2633030 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # 
process_name=reactor_2 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2633030' 00:22:18.284 killing process with pid 2633030 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2633030 00:22:18.284 Received shutdown signal, test time was about 10.000000 seconds 00:22:18.284 00:22:18.284 Latency(us) 00:22:18.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.284 =================================================================================================================== 00:22:18.284 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:18.284 [2024-06-09 09:00:40.715010] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2633030 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:18.284 
09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2633248 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2633248 /var/tmp/bdevperf.sock 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2633248 ']' 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:18.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:18.284 09:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.544 [2024-06-09 09:00:40.871379] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:22:18.544 [2024-06-09 09:00:40.871436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2633248 ] 00:22:18.544 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.544 [2024-06-09 09:00:40.919903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.544 [2024-06-09 09:00:40.971433] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.114 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:19.114 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:19.114 09:00:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:19.374 [2024-06-09 09:00:41.798526] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:19.374 [2024-06-09 09:00:41.800063] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2351330 (9): Bad file descriptor 00:22:19.374 [2024-06-09 09:00:41.801062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:19.374 [2024-06-09 09:00:41.801069] 
nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:19.374 [2024-06-09 09:00:41.801076] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:19.374 request: 00:22:19.374 { 00:22:19.374 "name": "TLSTEST", 00:22:19.374 "trtype": "tcp", 00:22:19.374 "traddr": "10.0.0.2", 00:22:19.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:19.374 "adrfam": "ipv4", 00:22:19.374 "trsvcid": "4420", 00:22:19.374 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.374 "method": "bdev_nvme_attach_controller", 00:22:19.374 "req_id": 1 00:22:19.374 } 00:22:19.374 Got JSON-RPC error response 00:22:19.374 response: 00:22:19.374 { 00:22:19.374 "code": -5, 00:22:19.374 "message": "Input/output error" 00:22:19.374 } 00:22:19.374 09:00:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2633248 00:22:19.374 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2633248 ']' 00:22:19.374 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2633248 00:22:19.374 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:19.374 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:19.374 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2633248 00:22:19.374 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:19.374 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:19.374 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2633248' 00:22:19.374 killing process with pid 2633248 00:22:19.374 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2633248 00:22:19.374 Received shutdown signal, test time was about 10.000000 seconds 00:22:19.374 00:22:19.374 Latency(us) 00:22:19.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:22:19.374 =================================================================================================================== 00:22:19.374 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:19.374 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2633248 00:22:19.635 09:00:41 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:19.635 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:19.635 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:19.635 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:19.635 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:19.635 09:00:41 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2627595 00:22:19.635 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2627595 ']' 00:22:19.635 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2627595 00:22:19.635 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:19.635 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:19.635 09:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2627595 00:22:19.635 09:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:19.635 09:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:19.635 09:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2627595' 00:22:19.635 killing process with pid 2627595 00:22:19.635 09:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2627595 00:22:19.635 [2024-06-09 09:00:42.045237] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:19.635 09:00:42 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@973 -- # wait 2627595 00:22:19.635 09:00:42 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:19.635 09:00:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:19.635 09:00:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:19.635 09:00:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:19.635 09:00:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:19.635 09:00:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:19.635 09:00:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:19.895 09:00:42 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:19.895 09:00:42 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:19.895 09:00:42 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.k4Na6MXcvk 00:22:19.895 09:00:42 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:19.895 09:00:42 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.k4Na6MXcvk 00:22:19.895 09:00:42 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:19.895 09:00:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.895 09:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:19.895 09:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.895 09:00:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2633461 00:22:19.895 09:00:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2633461 00:22:19.895 09:00:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:19.895 09:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2633461 ']' 00:22:19.895 09:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.895 09:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:19.895 09:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.895 09:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:19.895 09:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.895 [2024-06-09 09:00:42.273646] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:22:19.895 [2024-06-09 09:00:42.273698] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.895 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.895 [2024-06-09 09:00:42.352484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.895 [2024-06-09 09:00:42.404847] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.895 [2024-06-09 09:00:42.404882] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.895 [2024-06-09 09:00:42.404887] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.895 [2024-06-09 09:00:42.404892] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
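The `format_interchange_psk` / `format_key` trace above builds the `NVMeTLSkey-1:02:…` string from the configured secret and a digest id. A minimal sketch of that construction (the layout is inferred from the trace and the NVMe/TCP TLS PSK interchange format: secret bytes, plus their CRC32 in little-endian, base64-encoded between `NVMeTLSkey-1:<hh>:` and a trailing `:`; the helper name and exact CRC handling are assumptions, not SPDK's verbatim code):

```python
import base64
import zlib

def format_interchange_psk(secret: str, hash_id: int) -> str:
    # The shell helper passes the secret as its ASCII hex string; append
    # its CRC32 (4 bytes, little-endian) and base64-encode the result.
    data = secret.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")
    b64 = base64.b64encode(data + crc).decode("ascii")
    # "02" here is the digest id from the trace (digest=2).
    return f"NVMeTLSkey-1:{hash_id:02d}:{b64}:"
```

Running this with the secret and digest from the trace reproduces the same base64 body seen in `key_long` above, which the test then writes to a `mktemp` file and `chmod 0600`s before handing it to `nvmf_subsystem_add_host --psk`.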
00:22:19.895 [2024-06-09 09:00:42.404896] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:19.895 [2024-06-09 09:00:42.404914] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.476 09:00:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:20.477 09:00:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:20.477 09:00:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:20.477 09:00:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:20.477 09:00:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.742 09:00:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.742 09:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.k4Na6MXcvk 00:22:20.742 09:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.k4Na6MXcvk 00:22:20.742 09:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:20.742 [2024-06-09 09:00:43.206644] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.742 09:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:21.003 09:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:21.003 [2024-06-09 09:00:43.487325] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:21.003 [2024-06-09 09:00:43.487498] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.003 
09:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:21.264 malloc0 00:22:21.264 09:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:21.264 09:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k4Na6MXcvk 00:22:21.526 [2024-06-09 09:00:43.934185] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:21.526 09:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.k4Na6MXcvk 00:22:21.526 09:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:21.526 09:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:21.526 09:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:21.526 09:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.k4Na6MXcvk' 00:22:21.526 09:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:21.526 09:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2633861 00:22:21.526 09:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:21.526 09:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2633861 /var/tmp/bdevperf.sock 00:22:21.526 09:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:21.526 09:00:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' 
-z 2633861 ']' 00:22:21.526 09:00:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:21.526 09:00:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:21.526 09:00:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:21.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:21.526 09:00:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:21.526 09:00:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.526 [2024-06-09 09:00:43.996152] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:22:21.526 [2024-06-09 09:00:43.996204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2633861 ] 00:22:21.526 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.526 [2024-06-09 09:00:44.045662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.786 [2024-06-09 09:00:44.097598] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.419 09:00:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:22.420 09:00:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:22.420 09:00:44 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k4Na6MXcvk 00:22:22.420 [2024-06-09 09:00:44.906459] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:22:22.420 [2024-06-09 09:00:44.906519] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:22.679 TLSTESTn1 00:22:22.679 09:00:45 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:22.679 Running I/O for 10 seconds... 00:22:32.679 00:22:32.679 Latency(us) 00:22:32.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.679 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:32.679 Verification LBA range: start 0x0 length 0x2000 00:22:32.679 TLSTESTn1 : 10.07 1963.12 7.67 0.00 0.00 64973.13 6144.00 148548.27 00:22:32.679 =================================================================================================================== 00:22:32.679 Total : 1963.12 7.67 0.00 0.00 64973.13 6144.00 148548.27 00:22:32.679 0 00:22:32.679 09:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:32.679 09:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2633861 00:22:32.679 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2633861 ']' 00:22:32.679 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2633861 00:22:32.679 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:32.679 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:32.679 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2633861 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing 
process with pid 2633861' 00:22:32.940 killing process with pid 2633861 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2633861 00:22:32.940 Received shutdown signal, test time was about 10.000000 seconds 00:22:32.940 00:22:32.940 Latency(us) 00:22:32.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.940 =================================================================================================================== 00:22:32.940 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:32.940 [2024-06-09 09:00:55.278762] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2633861 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.k4Na6MXcvk 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.k4Na6MXcvk 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.k4Na6MXcvk 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.k4Na6MXcvk 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn 
hostnqn psk 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.k4Na6MXcvk' 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2636109 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2636109 /var/tmp/bdevperf.sock 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2636109 ']' 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:32.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:32.940 09:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.940 [2024-06-09 09:00:55.444932] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
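The `chmod 0666` step above deliberately loosens the key file so the next attach attempt fails. A minimal sketch of the kind of permission gate implied by the `Incorrect permissions for PSK file` error this log records (the function name and exact mode policy are assumptions inferred from the log, not SPDK's actual implementation):

```python
import os
import stat

def check_psk_permissions(path: str) -> None:
    # Reject PSK files readable or writable by group/others, i.e. anything
    # looser than 0600 -- matching the 0600-passes / 0666-fails behavior
    # exercised by this test.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(
            f"Incorrect permissions for PSK file: {oct(mode)}"
        )
```

With a 0600 key the check passes (the earlier TLSTESTn1 run succeeded); with 0666 it raises, which surfaces through JSON-RPC as the `-1` / "Operation not permitted" response shown below in the log.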
00:22:32.940 [2024-06-09 09:00:55.444985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2636109 ] 00:22:32.940 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.940 [2024-06-09 09:00:55.494604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.201 [2024-06-09 09:00:55.544862] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.776 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:33.776 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:33.776 09:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k4Na6MXcvk 00:22:34.037 [2024-06-09 09:00:56.353722] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:34.037 [2024-06-09 09:00:56.353768] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:34.037 [2024-06-09 09:00:56.353773] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.k4Na6MXcvk 00:22:34.037 request: 00:22:34.037 { 00:22:34.037 "name": "TLSTEST", 00:22:34.037 "trtype": "tcp", 00:22:34.037 "traddr": "10.0.0.2", 00:22:34.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:34.037 "adrfam": "ipv4", 00:22:34.037 "trsvcid": "4420", 00:22:34.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.037 "psk": "/tmp/tmp.k4Na6MXcvk", 00:22:34.037 "method": "bdev_nvme_attach_controller", 00:22:34.037 "req_id": 1 00:22:34.037 } 00:22:34.037 Got JSON-RPC error response 00:22:34.037 response: 00:22:34.037 { 00:22:34.037 "code": -1, 00:22:34.037 
"message": "Operation not permitted" 00:22:34.037 } 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2636109 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2636109 ']' 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2636109 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2636109 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2636109' 00:22:34.037 killing process with pid 2636109 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2636109 00:22:34.037 Received shutdown signal, test time was about 10.000000 seconds 00:22:34.037 00:22:34.037 Latency(us) 00:22:34.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.037 =================================================================================================================== 00:22:34.037 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2636109 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( 
!es == 0 )) 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2633461 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2633461 ']' 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2633461 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:34.037 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2633461 00:22:34.298 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:34.298 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:34.298 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2633461' 00:22:34.298 killing process with pid 2633461 00:22:34.298 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2633461 00:22:34.298 [2024-06-09 09:00:56.605095] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:34.298 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2633461 00:22:34.298 09:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:34.298 09:00:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:34.298 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:34.298 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.298 09:00:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2636453 00:22:34.298 09:00:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2636453 00:22:34.298 09:00:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:34.298 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2636453 ']' 00:22:34.298 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.298 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:34.298 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.298 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:34.298 09:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.298 [2024-06-09 09:00:56.787178] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:22:34.298 [2024-06-09 09:00:56.787235] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.298 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.558 [2024-06-09 09:00:56.867779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.558 [2024-06-09 09:00:56.922051] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.558 [2024-06-09 09:00:56.922082] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.558 [2024-06-09 09:00:56.922087] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.558 [2024-06-09 09:00:56.922092] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:34.558 [2024-06-09 09:00:56.922096] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:34.558 [2024-06-09 09:00:56.922111] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.130 09:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:35.130 09:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:35.130 09:00:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:35.130 09:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:35.130 09:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.130 09:00:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.130 09:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.k4Na6MXcvk 00:22:35.130 09:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:35.130 09:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.k4Na6MXcvk 00:22:35.130 09:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:22:35.130 09:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:35.130 09:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:22:35.130 09:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:35.130 09:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.k4Na6MXcvk 00:22:35.130 09:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.k4Na6MXcvk 00:22:35.130 09:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:35.389 [2024-06-09 09:00:57.723447] tcp.c: 
672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.390 09:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:35.390 09:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:35.650 [2024-06-09 09:00:58.032201] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:35.650 [2024-06-09 09:00:58.032374] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.650 09:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:35.650 malloc0 00:22:35.650 09:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:35.911 09:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k4Na6MXcvk 00:22:35.911 [2024-06-09 09:00:58.467173] tcp.c:3580:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:35.911 [2024-06-09 09:00:58.467190] tcp.c:3666:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:35.911 [2024-06-09 09:00:58.467208] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:36.172 request: 00:22:36.172 { 00:22:36.172 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.172 "host": "nqn.2016-06.io.spdk:host1", 00:22:36.172 "psk": "/tmp/tmp.k4Na6MXcvk", 00:22:36.172 "method": "nvmf_subsystem_add_host", 00:22:36.172 "req_id": 1 00:22:36.172 } 
00:22:36.172 Got JSON-RPC error response 00:22:36.172 response: 00:22:36.172 { 00:22:36.172 "code": -32603, 00:22:36.172 "message": "Internal error" 00:22:36.172 } 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2636453 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2636453 ']' 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2636453 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2636453 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2636453' 00:22:36.172 killing process with pid 2636453 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2636453 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2636453 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.k4Na6MXcvk 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- 
# xtrace_disable 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2636824 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2636824 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2636824 ']' 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:36.172 09:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.172 [2024-06-09 09:00:58.724331] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:22:36.172 [2024-06-09 09:00:58.724383] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.432 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.432 [2024-06-09 09:00:58.802862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.432 [2024-06-09 09:00:58.854851] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:36.432 [2024-06-09 09:00:58.854886] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.433 [2024-06-09 09:00:58.854892] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.433 [2024-06-09 09:00:58.854897] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.433 [2024-06-09 09:00:58.854901] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.433 [2024-06-09 09:00:58.854917] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.004 09:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:37.004 09:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:37.004 09:00:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:37.004 09:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:37.004 09:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.004 09:00:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.004 09:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.k4Na6MXcvk 00:22:37.004 09:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.k4Na6MXcvk 00:22:37.004 09:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:37.264 [2024-06-09 09:00:59.696447] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.264 09:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:37.525 09:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:37.525 [2024-06-09 09:01:00.017232] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:37.525 [2024-06-09 09:01:00.017429] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.525 09:01:00 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:37.786 malloc0 00:22:37.786 09:01:00 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:37.786 09:01:00 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k4Na6MXcvk 00:22:38.046 [2024-06-09 09:01:00.464090] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:38.046 09:01:00 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:38.046 09:01:00 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2637183 00:22:38.046 09:01:00 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:38.046 09:01:00 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2637183 /var/tmp/bdevperf.sock 00:22:38.046 09:01:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2637183 ']' 00:22:38.046 09:01:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.046 09:01:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local 
max_retries=100 00:22:38.046 09:01:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:38.046 09:01:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:38.046 09:01:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.046 [2024-06-09 09:01:00.509861] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:22:38.046 [2024-06-09 09:01:00.509918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2637183 ] 00:22:38.046 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.046 [2024-06-09 09:01:00.565227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.306 [2024-06-09 09:01:00.616876] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.306 09:01:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:38.306 09:01:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:38.306 09:01:00 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k4Na6MXcvk 00:22:38.306 [2024-06-09 09:01:00.836203] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:38.306 [2024-06-09 09:01:00.836268] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:38.566 TLSTESTn1 
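For reference, the target-side setup traced above (target/tls.sh steps 51, 52, 53, 55, 56, 58) is a short sequence of `scripts/rpc.py` calls. The sketch below expresses that sequence as the JSON-RPC payloads those calls produce; the method names and parameter values are copied from the `save_config` dump later in this log, while the JSON-RPC 2.0 envelope and the request ids are assumptions, not a wire capture:

```python
import json

def rpc_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request of the kind scripts/rpc.py sends
    over the app's UNIX socket (envelope assumed; methods from the trace)."""
    req = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        req["params"] = params
    return req

# The TLS target setup traced above, expressed as RPC payloads.
setup = [
    # nvmf_create_transport -t tcp -o  (c2h_success false per the dump)
    rpc_request(1, "nvmf_create_transport",
                {"trtype": "TCP", "c2h_success": False}),
    # nvmf_create_subsystem ... -s SPDK00000000000001 -m 10
    rpc_request(2, "nvmf_create_subsystem", {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "serial_number": "SPDK00000000000001",
        "max_namespaces": 10,
    }),
    # nvmf_subsystem_add_listener ... -t tcp -a 10.0.0.2 -s 4420 -k
    rpc_request(3, "nvmf_subsystem_add_listener", {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                           "traddr": "10.0.0.2", "trsvcid": "4420"},
        "secure_channel": True,  # the -k flag in the trace
    }),
    # nvmf_subsystem_add_host ... --psk /tmp/tmp.k4Na6MXcvk
    # (PSK-path form; the log warns it is deprecated for removal in v24.09)
    rpc_request(4, "nvmf_subsystem_add_host", {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "host": "nqn.2016-06.io.spdk:host1",
        "psk": "/tmp/tmp.k4Na6MXcvk",
    }),
]
payload = "\n".join(json.dumps(r) for r in setup)
```

The `malloc0` bdev creation and `nvmf_subsystem_add_ns` steps from the trace are elided here for brevity; they follow the same request shape.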
00:22:38.566 09:01:00 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:38.827 09:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:38.827 "subsystems": [ 00:22:38.827 { 00:22:38.827 "subsystem": "keyring", 00:22:38.827 "config": [] 00:22:38.827 }, 00:22:38.827 { 00:22:38.827 "subsystem": "iobuf", 00:22:38.827 "config": [ 00:22:38.827 { 00:22:38.827 "method": "iobuf_set_options", 00:22:38.827 "params": { 00:22:38.827 "small_pool_count": 8192, 00:22:38.827 "large_pool_count": 1024, 00:22:38.827 "small_bufsize": 8192, 00:22:38.827 "large_bufsize": 135168 00:22:38.827 } 00:22:38.827 } 00:22:38.827 ] 00:22:38.827 }, 00:22:38.827 { 00:22:38.827 "subsystem": "sock", 00:22:38.827 "config": [ 00:22:38.827 { 00:22:38.827 "method": "sock_set_default_impl", 00:22:38.827 "params": { 00:22:38.827 "impl_name": "posix" 00:22:38.827 } 00:22:38.827 }, 00:22:38.827 { 00:22:38.827 "method": "sock_impl_set_options", 00:22:38.827 "params": { 00:22:38.827 "impl_name": "ssl", 00:22:38.827 "recv_buf_size": 4096, 00:22:38.827 "send_buf_size": 4096, 00:22:38.827 "enable_recv_pipe": true, 00:22:38.827 "enable_quickack": false, 00:22:38.827 "enable_placement_id": 0, 00:22:38.827 "enable_zerocopy_send_server": true, 00:22:38.827 "enable_zerocopy_send_client": false, 00:22:38.827 "zerocopy_threshold": 0, 00:22:38.827 "tls_version": 0, 00:22:38.827 "enable_ktls": false 00:22:38.827 } 00:22:38.827 }, 00:22:38.827 { 00:22:38.827 "method": "sock_impl_set_options", 00:22:38.827 "params": { 00:22:38.827 "impl_name": "posix", 00:22:38.827 "recv_buf_size": 2097152, 00:22:38.827 "send_buf_size": 2097152, 00:22:38.827 "enable_recv_pipe": true, 00:22:38.827 "enable_quickack": false, 00:22:38.827 "enable_placement_id": 0, 00:22:38.827 "enable_zerocopy_send_server": true, 00:22:38.827 "enable_zerocopy_send_client": false, 00:22:38.827 "zerocopy_threshold": 0, 00:22:38.827 "tls_version": 0, 00:22:38.827 
"enable_ktls": false 00:22:38.827 } 00:22:38.827 } 00:22:38.827 ] 00:22:38.827 }, 00:22:38.827 { 00:22:38.827 "subsystem": "vmd", 00:22:38.827 "config": [] 00:22:38.827 }, 00:22:38.827 { 00:22:38.827 "subsystem": "accel", 00:22:38.827 "config": [ 00:22:38.827 { 00:22:38.827 "method": "accel_set_options", 00:22:38.827 "params": { 00:22:38.827 "small_cache_size": 128, 00:22:38.827 "large_cache_size": 16, 00:22:38.827 "task_count": 2048, 00:22:38.827 "sequence_count": 2048, 00:22:38.827 "buf_count": 2048 00:22:38.827 } 00:22:38.827 } 00:22:38.827 ] 00:22:38.827 }, 00:22:38.827 { 00:22:38.827 "subsystem": "bdev", 00:22:38.827 "config": [ 00:22:38.827 { 00:22:38.827 "method": "bdev_set_options", 00:22:38.827 "params": { 00:22:38.827 "bdev_io_pool_size": 65535, 00:22:38.827 "bdev_io_cache_size": 256, 00:22:38.827 "bdev_auto_examine": true, 00:22:38.827 "iobuf_small_cache_size": 128, 00:22:38.827 "iobuf_large_cache_size": 16 00:22:38.827 } 00:22:38.827 }, 00:22:38.827 { 00:22:38.827 "method": "bdev_raid_set_options", 00:22:38.827 "params": { 00:22:38.827 "process_window_size_kb": 1024 00:22:38.827 } 00:22:38.827 }, 00:22:38.827 { 00:22:38.827 "method": "bdev_iscsi_set_options", 00:22:38.827 "params": { 00:22:38.827 "timeout_sec": 30 00:22:38.827 } 00:22:38.827 }, 00:22:38.827 { 00:22:38.827 "method": "bdev_nvme_set_options", 00:22:38.827 "params": { 00:22:38.828 "action_on_timeout": "none", 00:22:38.828 "timeout_us": 0, 00:22:38.828 "timeout_admin_us": 0, 00:22:38.828 "keep_alive_timeout_ms": 10000, 00:22:38.828 "arbitration_burst": 0, 00:22:38.828 "low_priority_weight": 0, 00:22:38.828 "medium_priority_weight": 0, 00:22:38.828 "high_priority_weight": 0, 00:22:38.828 "nvme_adminq_poll_period_us": 10000, 00:22:38.828 "nvme_ioq_poll_period_us": 0, 00:22:38.828 "io_queue_requests": 0, 00:22:38.828 "delay_cmd_submit": true, 00:22:38.828 "transport_retry_count": 4, 00:22:38.828 "bdev_retry_count": 3, 00:22:38.828 "transport_ack_timeout": 0, 00:22:38.828 
"ctrlr_loss_timeout_sec": 0, 00:22:38.828 "reconnect_delay_sec": 0, 00:22:38.828 "fast_io_fail_timeout_sec": 0, 00:22:38.828 "disable_auto_failback": false, 00:22:38.828 "generate_uuids": false, 00:22:38.828 "transport_tos": 0, 00:22:38.828 "nvme_error_stat": false, 00:22:38.828 "rdma_srq_size": 0, 00:22:38.828 "io_path_stat": false, 00:22:38.828 "allow_accel_sequence": false, 00:22:38.828 "rdma_max_cq_size": 0, 00:22:38.828 "rdma_cm_event_timeout_ms": 0, 00:22:38.828 "dhchap_digests": [ 00:22:38.828 "sha256", 00:22:38.828 "sha384", 00:22:38.828 "sha512" 00:22:38.828 ], 00:22:38.828 "dhchap_dhgroups": [ 00:22:38.828 "null", 00:22:38.828 "ffdhe2048", 00:22:38.828 "ffdhe3072", 00:22:38.828 "ffdhe4096", 00:22:38.828 "ffdhe6144", 00:22:38.828 "ffdhe8192" 00:22:38.828 ] 00:22:38.828 } 00:22:38.828 }, 00:22:38.828 { 00:22:38.828 "method": "bdev_nvme_set_hotplug", 00:22:38.828 "params": { 00:22:38.828 "period_us": 100000, 00:22:38.828 "enable": false 00:22:38.828 } 00:22:38.828 }, 00:22:38.828 { 00:22:38.828 "method": "bdev_malloc_create", 00:22:38.828 "params": { 00:22:38.828 "name": "malloc0", 00:22:38.828 "num_blocks": 8192, 00:22:38.828 "block_size": 4096, 00:22:38.828 "physical_block_size": 4096, 00:22:38.828 "uuid": "0af1e551-454c-4da6-95ee-b52a253320bb", 00:22:38.828 "optimal_io_boundary": 0 00:22:38.828 } 00:22:38.828 }, 00:22:38.828 { 00:22:38.828 "method": "bdev_wait_for_examine" 00:22:38.828 } 00:22:38.828 ] 00:22:38.828 }, 00:22:38.828 { 00:22:38.828 "subsystem": "nbd", 00:22:38.828 "config": [] 00:22:38.828 }, 00:22:38.828 { 00:22:38.828 "subsystem": "scheduler", 00:22:38.828 "config": [ 00:22:38.828 { 00:22:38.828 "method": "framework_set_scheduler", 00:22:38.828 "params": { 00:22:38.828 "name": "static" 00:22:38.828 } 00:22:38.828 } 00:22:38.828 ] 00:22:38.828 }, 00:22:38.828 { 00:22:38.828 "subsystem": "nvmf", 00:22:38.828 "config": [ 00:22:38.828 { 00:22:38.828 "method": "nvmf_set_config", 00:22:38.828 "params": { 00:22:38.828 "discovery_filter": 
"match_any", 00:22:38.828 "admin_cmd_passthru": { 00:22:38.828 "identify_ctrlr": false 00:22:38.828 } 00:22:38.828 } 00:22:38.828 }, 00:22:38.828 { 00:22:38.828 "method": "nvmf_set_max_subsystems", 00:22:38.828 "params": { 00:22:38.828 "max_subsystems": 1024 00:22:38.828 } 00:22:38.828 }, 00:22:38.828 { 00:22:38.828 "method": "nvmf_set_crdt", 00:22:38.828 "params": { 00:22:38.828 "crdt1": 0, 00:22:38.828 "crdt2": 0, 00:22:38.828 "crdt3": 0 00:22:38.828 } 00:22:38.828 }, 00:22:38.828 { 00:22:38.828 "method": "nvmf_create_transport", 00:22:38.828 "params": { 00:22:38.828 "trtype": "TCP", 00:22:38.828 "max_queue_depth": 128, 00:22:38.828 "max_io_qpairs_per_ctrlr": 127, 00:22:38.828 "in_capsule_data_size": 4096, 00:22:38.828 "max_io_size": 131072, 00:22:38.828 "io_unit_size": 131072, 00:22:38.828 "max_aq_depth": 128, 00:22:38.828 "num_shared_buffers": 511, 00:22:38.828 "buf_cache_size": 4294967295, 00:22:38.828 "dif_insert_or_strip": false, 00:22:38.828 "zcopy": false, 00:22:38.828 "c2h_success": false, 00:22:38.828 "sock_priority": 0, 00:22:38.828 "abort_timeout_sec": 1, 00:22:38.828 "ack_timeout": 0, 00:22:38.828 "data_wr_pool_size": 0 00:22:38.828 } 00:22:38.828 }, 00:22:38.828 { 00:22:38.828 "method": "nvmf_create_subsystem", 00:22:38.828 "params": { 00:22:38.828 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.828 "allow_any_host": false, 00:22:38.828 "serial_number": "SPDK00000000000001", 00:22:38.828 "model_number": "SPDK bdev Controller", 00:22:38.828 "max_namespaces": 10, 00:22:38.828 "min_cntlid": 1, 00:22:38.828 "max_cntlid": 65519, 00:22:38.828 "ana_reporting": false 00:22:38.828 } 00:22:38.828 }, 00:22:38.828 { 00:22:38.828 "method": "nvmf_subsystem_add_host", 00:22:38.828 "params": { 00:22:38.828 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.828 "host": "nqn.2016-06.io.spdk:host1", 00:22:38.828 "psk": "/tmp/tmp.k4Na6MXcvk" 00:22:38.828 } 00:22:38.828 }, 00:22:38.828 { 00:22:38.828 "method": "nvmf_subsystem_add_ns", 00:22:38.828 "params": { 00:22:38.828 
"nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.828 "namespace": { 00:22:38.828 "nsid": 1, 00:22:38.828 "bdev_name": "malloc0", 00:22:38.828 "nguid": "0AF1E551454C4DA695EEB52A253320BB", 00:22:38.828 "uuid": "0af1e551-454c-4da6-95ee-b52a253320bb", 00:22:38.828 "no_auto_visible": false 00:22:38.828 } 00:22:38.828 } 00:22:38.828 }, 00:22:38.828 { 00:22:38.828 "method": "nvmf_subsystem_add_listener", 00:22:38.828 "params": { 00:22:38.828 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.828 "listen_address": { 00:22:38.828 "trtype": "TCP", 00:22:38.828 "adrfam": "IPv4", 00:22:38.828 "traddr": "10.0.0.2", 00:22:38.828 "trsvcid": "4420" 00:22:38.828 }, 00:22:38.828 "secure_channel": true 00:22:38.828 } 00:22:38.828 } 00:22:38.828 ] 00:22:38.828 } 00:22:38.828 ] 00:22:38.828 }' 00:22:38.828 09:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:39.089 09:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:39.089 "subsystems": [ 00:22:39.089 { 00:22:39.089 "subsystem": "keyring", 00:22:39.089 "config": [] 00:22:39.089 }, 00:22:39.089 { 00:22:39.089 "subsystem": "iobuf", 00:22:39.089 "config": [ 00:22:39.089 { 00:22:39.089 "method": "iobuf_set_options", 00:22:39.089 "params": { 00:22:39.089 "small_pool_count": 8192, 00:22:39.089 "large_pool_count": 1024, 00:22:39.089 "small_bufsize": 8192, 00:22:39.089 "large_bufsize": 135168 00:22:39.089 } 00:22:39.089 } 00:22:39.089 ] 00:22:39.089 }, 00:22:39.089 { 00:22:39.089 "subsystem": "sock", 00:22:39.089 "config": [ 00:22:39.089 { 00:22:39.089 "method": "sock_set_default_impl", 00:22:39.089 "params": { 00:22:39.089 "impl_name": "posix" 00:22:39.089 } 00:22:39.089 }, 00:22:39.089 { 00:22:39.089 "method": "sock_impl_set_options", 00:22:39.089 "params": { 00:22:39.089 "impl_name": "ssl", 00:22:39.089 "recv_buf_size": 4096, 00:22:39.089 "send_buf_size": 4096, 00:22:39.089 "enable_recv_pipe": true, 00:22:39.089 
"enable_quickack": false, 00:22:39.089 "enable_placement_id": 0, 00:22:39.089 "enable_zerocopy_send_server": true, 00:22:39.089 "enable_zerocopy_send_client": false, 00:22:39.089 "zerocopy_threshold": 0, 00:22:39.089 "tls_version": 0, 00:22:39.089 "enable_ktls": false 00:22:39.089 } 00:22:39.089 }, 00:22:39.089 { 00:22:39.089 "method": "sock_impl_set_options", 00:22:39.089 "params": { 00:22:39.089 "impl_name": "posix", 00:22:39.089 "recv_buf_size": 2097152, 00:22:39.089 "send_buf_size": 2097152, 00:22:39.089 "enable_recv_pipe": true, 00:22:39.089 "enable_quickack": false, 00:22:39.089 "enable_placement_id": 0, 00:22:39.089 "enable_zerocopy_send_server": true, 00:22:39.089 "enable_zerocopy_send_client": false, 00:22:39.089 "zerocopy_threshold": 0, 00:22:39.089 "tls_version": 0, 00:22:39.089 "enable_ktls": false 00:22:39.089 } 00:22:39.089 } 00:22:39.089 ] 00:22:39.089 }, 00:22:39.089 { 00:22:39.089 "subsystem": "vmd", 00:22:39.089 "config": [] 00:22:39.089 }, 00:22:39.089 { 00:22:39.089 "subsystem": "accel", 00:22:39.089 "config": [ 00:22:39.089 { 00:22:39.089 "method": "accel_set_options", 00:22:39.089 "params": { 00:22:39.089 "small_cache_size": 128, 00:22:39.089 "large_cache_size": 16, 00:22:39.089 "task_count": 2048, 00:22:39.089 "sequence_count": 2048, 00:22:39.089 "buf_count": 2048 00:22:39.089 } 00:22:39.089 } 00:22:39.089 ] 00:22:39.089 }, 00:22:39.089 { 00:22:39.089 "subsystem": "bdev", 00:22:39.089 "config": [ 00:22:39.089 { 00:22:39.089 "method": "bdev_set_options", 00:22:39.089 "params": { 00:22:39.089 "bdev_io_pool_size": 65535, 00:22:39.089 "bdev_io_cache_size": 256, 00:22:39.089 "bdev_auto_examine": true, 00:22:39.089 "iobuf_small_cache_size": 128, 00:22:39.089 "iobuf_large_cache_size": 16 00:22:39.089 } 00:22:39.089 }, 00:22:39.089 { 00:22:39.089 "method": "bdev_raid_set_options", 00:22:39.089 "params": { 00:22:39.089 "process_window_size_kb": 1024 00:22:39.089 } 00:22:39.089 }, 00:22:39.089 { 00:22:39.089 "method": "bdev_iscsi_set_options", 
00:22:39.089 "params": { 00:22:39.089 "timeout_sec": 30 00:22:39.089 } 00:22:39.089 }, 00:22:39.089 { 00:22:39.089 "method": "bdev_nvme_set_options", 00:22:39.089 "params": { 00:22:39.089 "action_on_timeout": "none", 00:22:39.089 "timeout_us": 0, 00:22:39.089 "timeout_admin_us": 0, 00:22:39.089 "keep_alive_timeout_ms": 10000, 00:22:39.089 "arbitration_burst": 0, 00:22:39.089 "low_priority_weight": 0, 00:22:39.089 "medium_priority_weight": 0, 00:22:39.089 "high_priority_weight": 0, 00:22:39.089 "nvme_adminq_poll_period_us": 10000, 00:22:39.089 "nvme_ioq_poll_period_us": 0, 00:22:39.090 "io_queue_requests": 512, 00:22:39.090 "delay_cmd_submit": true, 00:22:39.090 "transport_retry_count": 4, 00:22:39.090 "bdev_retry_count": 3, 00:22:39.090 "transport_ack_timeout": 0, 00:22:39.090 "ctrlr_loss_timeout_sec": 0, 00:22:39.090 "reconnect_delay_sec": 0, 00:22:39.090 "fast_io_fail_timeout_sec": 0, 00:22:39.090 "disable_auto_failback": false, 00:22:39.090 "generate_uuids": false, 00:22:39.090 "transport_tos": 0, 00:22:39.090 "nvme_error_stat": false, 00:22:39.090 "rdma_srq_size": 0, 00:22:39.090 "io_path_stat": false, 00:22:39.090 "allow_accel_sequence": false, 00:22:39.090 "rdma_max_cq_size": 0, 00:22:39.090 "rdma_cm_event_timeout_ms": 0, 00:22:39.090 "dhchap_digests": [ 00:22:39.090 "sha256", 00:22:39.090 "sha384", 00:22:39.090 "sha512" 00:22:39.090 ], 00:22:39.090 "dhchap_dhgroups": [ 00:22:39.090 "null", 00:22:39.090 "ffdhe2048", 00:22:39.090 "ffdhe3072", 00:22:39.090 "ffdhe4096", 00:22:39.090 "ffdhe6144", 00:22:39.090 "ffdhe8192" 00:22:39.090 ] 00:22:39.090 } 00:22:39.090 }, 00:22:39.090 { 00:22:39.090 "method": "bdev_nvme_attach_controller", 00:22:39.090 "params": { 00:22:39.090 "name": "TLSTEST", 00:22:39.090 "trtype": "TCP", 00:22:39.090 "adrfam": "IPv4", 00:22:39.090 "traddr": "10.0.0.2", 00:22:39.090 "trsvcid": "4420", 00:22:39.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.090 "prchk_reftag": false, 00:22:39.090 "prchk_guard": false, 00:22:39.090 
"ctrlr_loss_timeout_sec": 0, 00:22:39.090 "reconnect_delay_sec": 0, 00:22:39.090 "fast_io_fail_timeout_sec": 0, 00:22:39.090 "psk": "/tmp/tmp.k4Na6MXcvk", 00:22:39.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:39.090 "hdgst": false, 00:22:39.090 "ddgst": false 00:22:39.090 } 00:22:39.090 }, 00:22:39.090 { 00:22:39.090 "method": "bdev_nvme_set_hotplug", 00:22:39.090 "params": { 00:22:39.090 "period_us": 100000, 00:22:39.090 "enable": false 00:22:39.090 } 00:22:39.090 }, 00:22:39.090 { 00:22:39.090 "method": "bdev_wait_for_examine" 00:22:39.090 } 00:22:39.090 ] 00:22:39.090 }, 00:22:39.090 { 00:22:39.090 "subsystem": "nbd", 00:22:39.090 "config": [] 00:22:39.090 } 00:22:39.090 ] 00:22:39.090 }' 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2637183 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2637183 ']' 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2637183 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2637183 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2637183' 00:22:39.090 killing process with pid 2637183 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2637183 00:22:39.090 Received shutdown signal, test time was about 10.000000 seconds 00:22:39.090 00:22:39.090 Latency(us) 00:22:39.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.090 
=================================================================================================================== 00:22:39.090 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:39.090 [2024-06-09 09:01:01.480661] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2637183 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2636824 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2636824 ']' 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2636824 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2636824 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2636824' 00:22:39.090 killing process with pid 2636824 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2636824 00:22:39.090 [2024-06-09 09:01:01.646963] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:39.090 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2636824 00:22:39.393 09:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:39.393 09:01:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:39.393 09:01:01 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:22:39.393 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.393 09:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:39.393 "subsystems": [ 00:22:39.393 { 00:22:39.393 "subsystem": "keyring", 00:22:39.393 "config": [] 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "subsystem": "iobuf", 00:22:39.393 "config": [ 00:22:39.393 { 00:22:39.393 "method": "iobuf_set_options", 00:22:39.393 "params": { 00:22:39.393 "small_pool_count": 8192, 00:22:39.393 "large_pool_count": 1024, 00:22:39.393 "small_bufsize": 8192, 00:22:39.393 "large_bufsize": 135168 00:22:39.393 } 00:22:39.393 } 00:22:39.393 ] 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "subsystem": "sock", 00:22:39.393 "config": [ 00:22:39.393 { 00:22:39.393 "method": "sock_set_default_impl", 00:22:39.393 "params": { 00:22:39.393 "impl_name": "posix" 00:22:39.393 } 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "method": "sock_impl_set_options", 00:22:39.393 "params": { 00:22:39.393 "impl_name": "ssl", 00:22:39.393 "recv_buf_size": 4096, 00:22:39.393 "send_buf_size": 4096, 00:22:39.393 "enable_recv_pipe": true, 00:22:39.393 "enable_quickack": false, 00:22:39.393 "enable_placement_id": 0, 00:22:39.393 "enable_zerocopy_send_server": true, 00:22:39.393 "enable_zerocopy_send_client": false, 00:22:39.393 "zerocopy_threshold": 0, 00:22:39.393 "tls_version": 0, 00:22:39.393 "enable_ktls": false 00:22:39.393 } 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "method": "sock_impl_set_options", 00:22:39.393 "params": { 00:22:39.393 "impl_name": "posix", 00:22:39.393 "recv_buf_size": 2097152, 00:22:39.393 "send_buf_size": 2097152, 00:22:39.393 "enable_recv_pipe": true, 00:22:39.393 "enable_quickack": false, 00:22:39.393 "enable_placement_id": 0, 00:22:39.393 "enable_zerocopy_send_server": true, 00:22:39.393 "enable_zerocopy_send_client": false, 00:22:39.393 "zerocopy_threshold": 0, 00:22:39.393 "tls_version": 0, 00:22:39.393 "enable_ktls": false 
00:22:39.393 } 00:22:39.393 } 00:22:39.393 ] 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "subsystem": "vmd", 00:22:39.393 "config": [] 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "subsystem": "accel", 00:22:39.393 "config": [ 00:22:39.393 { 00:22:39.393 "method": "accel_set_options", 00:22:39.393 "params": { 00:22:39.393 "small_cache_size": 128, 00:22:39.393 "large_cache_size": 16, 00:22:39.393 "task_count": 2048, 00:22:39.393 "sequence_count": 2048, 00:22:39.393 "buf_count": 2048 00:22:39.393 } 00:22:39.393 } 00:22:39.393 ] 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "subsystem": "bdev", 00:22:39.393 "config": [ 00:22:39.393 { 00:22:39.393 "method": "bdev_set_options", 00:22:39.393 "params": { 00:22:39.393 "bdev_io_pool_size": 65535, 00:22:39.393 "bdev_io_cache_size": 256, 00:22:39.393 "bdev_auto_examine": true, 00:22:39.393 "iobuf_small_cache_size": 128, 00:22:39.393 "iobuf_large_cache_size": 16 00:22:39.393 } 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "method": "bdev_raid_set_options", 00:22:39.393 "params": { 00:22:39.393 "process_window_size_kb": 1024 00:22:39.393 } 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "method": "bdev_iscsi_set_options", 00:22:39.393 "params": { 00:22:39.393 "timeout_sec": 30 00:22:39.393 } 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "method": "bdev_nvme_set_options", 00:22:39.393 "params": { 00:22:39.393 "action_on_timeout": "none", 00:22:39.393 "timeout_us": 0, 00:22:39.393 "timeout_admin_us": 0, 00:22:39.393 "keep_alive_timeout_ms": 10000, 00:22:39.393 "arbitration_burst": 0, 00:22:39.393 "low_priority_weight": 0, 00:22:39.393 "medium_priority_weight": 0, 00:22:39.393 "high_priority_weight": 0, 00:22:39.393 "nvme_adminq_poll_period_us": 10000, 00:22:39.393 "nvme_ioq_poll_period_us": 0, 00:22:39.393 "io_queue_requests": 0, 00:22:39.393 "delay_cmd_submit": true, 00:22:39.393 "transport_retry_count": 4, 00:22:39.393 "bdev_retry_count": 3, 00:22:39.393 "transport_ack_timeout": 0, 00:22:39.393 "ctrlr_loss_timeout_sec": 0, 00:22:39.393 
"reconnect_delay_sec": 0, 00:22:39.393 "fast_io_fail_timeout_sec": 0, 00:22:39.393 "disable_auto_failback": false, 00:22:39.393 "generate_uuids": false, 00:22:39.393 "transport_tos": 0, 00:22:39.393 "nvme_error_stat": false, 00:22:39.393 "rdma_srq_size": 0, 00:22:39.393 "io_path_stat": false, 00:22:39.393 "allow_accel_sequence": false, 00:22:39.393 "rdma_max_cq_size": 0, 00:22:39.393 "rdma_cm_event_timeout_ms": 0, 00:22:39.393 "dhchap_digests": [ 00:22:39.393 "sha256", 00:22:39.393 "sha384", 00:22:39.393 "sha512" 00:22:39.393 ], 00:22:39.393 "dhchap_dhgroups": [ 00:22:39.393 "null", 00:22:39.393 "ffdhe2048", 00:22:39.393 "ffdhe3072", 00:22:39.393 "ffdhe4096", 00:22:39.393 "ffdhe6144", 00:22:39.393 "ffdhe8192" 00:22:39.393 ] 00:22:39.393 } 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "method": "bdev_nvme_set_hotplug", 00:22:39.393 "params": { 00:22:39.393 "period_us": 100000, 00:22:39.393 "enable": false 00:22:39.393 } 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "method": "bdev_malloc_create", 00:22:39.393 "params": { 00:22:39.393 "name": "malloc0", 00:22:39.393 "num_blocks": 8192, 00:22:39.393 "block_size": 4096, 00:22:39.393 "physical_block_size": 4096, 00:22:39.393 "uuid": "0af1e551-454c-4da6-95ee-b52a253320bb", 00:22:39.393 "optimal_io_boundary": 0 00:22:39.393 } 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "method": "bdev_wait_for_examine" 00:22:39.393 } 00:22:39.393 ] 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "subsystem": "nbd", 00:22:39.393 "config": [] 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "subsystem": "scheduler", 00:22:39.393 "config": [ 00:22:39.393 { 00:22:39.393 "method": "framework_set_scheduler", 00:22:39.393 "params": { 00:22:39.393 "name": "static" 00:22:39.393 } 00:22:39.393 } 00:22:39.393 ] 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "subsystem": "nvmf", 00:22:39.393 "config": [ 00:22:39.393 { 00:22:39.393 "method": "nvmf_set_config", 00:22:39.393 "params": { 00:22:39.393 "discovery_filter": "match_any", 00:22:39.393 "admin_cmd_passthru": { 
00:22:39.393 "identify_ctrlr": false 00:22:39.393 } 00:22:39.393 } 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "method": "nvmf_set_max_subsystems", 00:22:39.393 "params": { 00:22:39.393 "max_subsystems": 1024 00:22:39.393 } 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "method": "nvmf_set_crdt", 00:22:39.393 "params": { 00:22:39.393 "crdt1": 0, 00:22:39.393 "crdt2": 0, 00:22:39.393 "crdt3": 0 00:22:39.393 } 00:22:39.393 }, 00:22:39.393 { 00:22:39.393 "method": "nvmf_create_transport", 00:22:39.393 "params": { 00:22:39.393 "trtype": "TCP", 00:22:39.393 "max_queue_depth": 128, 00:22:39.393 "max_io_qpairs_per_ctrlr": 127, 00:22:39.393 "in_capsule_data_size": 4096, 00:22:39.393 "max_io_size": 131072, 00:22:39.393 "io_unit_size": 131072, 00:22:39.393 "max_aq_depth": 128, 00:22:39.393 "num_shared_buffers": 511, 00:22:39.393 "buf_cache_size": 4294967295, 00:22:39.393 "dif_insert_or_strip": false, 00:22:39.393 "zcopy": false, 00:22:39.393 "c2h_success": false, 00:22:39.393 "sock_priority": 0, 00:22:39.393 "abort_timeout_sec": 1, 00:22:39.394 "ack_timeout": 0, 00:22:39.394 "data_wr_pool_size": 0 00:22:39.394 } 00:22:39.394 }, 00:22:39.394 { 00:22:39.394 "method": "nvmf_create_subsystem", 00:22:39.394 "params": { 00:22:39.394 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.394 "allow_any_host": false, 00:22:39.394 "serial_number": "SPDK00000000000001", 00:22:39.394 "model_number": "SPDK bdev Controller", 00:22:39.394 "max_namespaces": 10, 00:22:39.394 "min_cntlid": 1, 00:22:39.394 "max_cntlid": 65519, 00:22:39.394 "ana_reporting": false 00:22:39.394 } 00:22:39.394 }, 00:22:39.394 { 00:22:39.394 "method": "nvmf_subsystem_add_host", 00:22:39.394 "params": { 00:22:39.394 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.394 "host": "nqn.2016-06.io.spdk:host1", 00:22:39.394 "psk": "/tmp/tmp.k4Na6MXcvk" 00:22:39.394 } 00:22:39.394 }, 00:22:39.394 { 00:22:39.394 "method": "nvmf_subsystem_add_ns", 00:22:39.394 "params": { 00:22:39.394 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.394 
"namespace": { 00:22:39.394 "nsid": 1, 00:22:39.394 "bdev_name": "malloc0", 00:22:39.394 "nguid": "0AF1E551454C4DA695EEB52A253320BB", 00:22:39.394 "uuid": "0af1e551-454c-4da6-95ee-b52a253320bb", 00:22:39.394 "no_auto_visible": false 00:22:39.394 } 00:22:39.394 } 00:22:39.394 }, 00:22:39.394 { 00:22:39.394 "method": "nvmf_subsystem_add_listener", 00:22:39.394 "params": { 00:22:39.394 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.394 "listen_address": { 00:22:39.394 "trtype": "TCP", 00:22:39.394 "adrfam": "IPv4", 00:22:39.394 "traddr": "10.0.0.2", 00:22:39.394 "trsvcid": "4420" 00:22:39.394 }, 00:22:39.394 "secure_channel": true 00:22:39.394 } 00:22:39.394 } 00:22:39.394 ] 00:22:39.394 } 00:22:39.394 ] 00:22:39.394 }' 00:22:39.394 09:01:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2637528 00:22:39.394 09:01:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2637528 00:22:39.394 09:01:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:39.394 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2637528 ']' 00:22:39.394 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.394 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:39.394 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.394 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:39.394 09:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.394 [2024-06-09 09:01:01.823359] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:22:39.394 [2024-06-09 09:01:01.823416] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.394 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.394 [2024-06-09 09:01:01.905486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.654 [2024-06-09 09:01:01.959214] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.654 [2024-06-09 09:01:01.959246] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.654 [2024-06-09 09:01:01.959252] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.654 [2024-06-09 09:01:01.959256] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.654 [2024-06-09 09:01:01.959260] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:39.654 [2024-06-09 09:01:01.959307] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.654 [2024-06-09 09:01:02.141921] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.654 [2024-06-09 09:01:02.157893] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:39.654 [2024-06-09 09:01:02.173941] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:39.654 [2024-06-09 09:01:02.183715] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.225 09:01:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:40.225 09:01:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:40.225 09:01:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:40.225 09:01:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:40.225 09:01:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.226 09:01:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.226 09:01:02 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2637561 00:22:40.226 09:01:02 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2637561 /var/tmp/bdevperf.sock 00:22:40.226 09:01:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2637561 ']' 00:22:40.226 09:01:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.226 09:01:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:40.226 09:01:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:40.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:40.226 09:01:02 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:40.226 09:01:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:40.226 09:01:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.226 09:01:02 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:40.226 "subsystems": [ 00:22:40.226 { 00:22:40.226 "subsystem": "keyring", 00:22:40.226 "config": [] 00:22:40.226 }, 00:22:40.226 { 00:22:40.226 "subsystem": "iobuf", 00:22:40.226 "config": [ 00:22:40.226 { 00:22:40.226 "method": "iobuf_set_options", 00:22:40.226 "params": { 00:22:40.226 "small_pool_count": 8192, 00:22:40.226 "large_pool_count": 1024, 00:22:40.226 "small_bufsize": 8192, 00:22:40.226 "large_bufsize": 135168 00:22:40.226 } 00:22:40.226 } 00:22:40.226 ] 00:22:40.226 }, 00:22:40.226 { 00:22:40.226 "subsystem": "sock", 00:22:40.226 "config": [ 00:22:40.226 { 00:22:40.226 "method": "sock_set_default_impl", 00:22:40.226 "params": { 00:22:40.226 "impl_name": "posix" 00:22:40.226 } 00:22:40.226 }, 00:22:40.226 { 00:22:40.226 "method": "sock_impl_set_options", 00:22:40.226 "params": { 00:22:40.226 "impl_name": "ssl", 00:22:40.226 "recv_buf_size": 4096, 00:22:40.226 "send_buf_size": 4096, 00:22:40.226 "enable_recv_pipe": true, 00:22:40.226 "enable_quickack": false, 00:22:40.226 "enable_placement_id": 0, 00:22:40.226 "enable_zerocopy_send_server": true, 00:22:40.226 "enable_zerocopy_send_client": false, 00:22:40.226 "zerocopy_threshold": 0, 00:22:40.226 "tls_version": 0, 00:22:40.226 "enable_ktls": false 00:22:40.226 } 00:22:40.226 }, 00:22:40.226 { 00:22:40.226 "method": "sock_impl_set_options", 00:22:40.226 "params": { 00:22:40.226 "impl_name": "posix", 00:22:40.226 "recv_buf_size": 
2097152, 00:22:40.226 "send_buf_size": 2097152, 00:22:40.226 "enable_recv_pipe": true, 00:22:40.226 "enable_quickack": false, 00:22:40.226 "enable_placement_id": 0, 00:22:40.226 "enable_zerocopy_send_server": true, 00:22:40.226 "enable_zerocopy_send_client": false, 00:22:40.226 "zerocopy_threshold": 0, 00:22:40.226 "tls_version": 0, 00:22:40.226 "enable_ktls": false 00:22:40.226 } 00:22:40.226 } 00:22:40.226 ] 00:22:40.226 }, 00:22:40.226 { 00:22:40.226 "subsystem": "vmd", 00:22:40.226 "config": [] 00:22:40.226 }, 00:22:40.226 { 00:22:40.226 "subsystem": "accel", 00:22:40.226 "config": [ 00:22:40.226 { 00:22:40.226 "method": "accel_set_options", 00:22:40.226 "params": { 00:22:40.226 "small_cache_size": 128, 00:22:40.226 "large_cache_size": 16, 00:22:40.226 "task_count": 2048, 00:22:40.226 "sequence_count": 2048, 00:22:40.226 "buf_count": 2048 00:22:40.226 } 00:22:40.226 } 00:22:40.226 ] 00:22:40.226 }, 00:22:40.226 { 00:22:40.226 "subsystem": "bdev", 00:22:40.226 "config": [ 00:22:40.226 { 00:22:40.226 "method": "bdev_set_options", 00:22:40.226 "params": { 00:22:40.226 "bdev_io_pool_size": 65535, 00:22:40.226 "bdev_io_cache_size": 256, 00:22:40.226 "bdev_auto_examine": true, 00:22:40.226 "iobuf_small_cache_size": 128, 00:22:40.226 "iobuf_large_cache_size": 16 00:22:40.226 } 00:22:40.226 }, 00:22:40.226 { 00:22:40.226 "method": "bdev_raid_set_options", 00:22:40.226 "params": { 00:22:40.226 "process_window_size_kb": 1024 00:22:40.226 } 00:22:40.226 }, 00:22:40.226 { 00:22:40.226 "method": "bdev_iscsi_set_options", 00:22:40.226 "params": { 00:22:40.226 "timeout_sec": 30 00:22:40.226 } 00:22:40.226 }, 00:22:40.226 { 00:22:40.226 "method": "bdev_nvme_set_options", 00:22:40.226 "params": { 00:22:40.226 "action_on_timeout": "none", 00:22:40.226 "timeout_us": 0, 00:22:40.226 "timeout_admin_us": 0, 00:22:40.226 "keep_alive_timeout_ms": 10000, 00:22:40.226 "arbitration_burst": 0, 00:22:40.226 "low_priority_weight": 0, 00:22:40.226 "medium_priority_weight": 0, 00:22:40.226 
"high_priority_weight": 0, 00:22:40.226 "nvme_adminq_poll_period_us": 10000, 00:22:40.226 "nvme_ioq_poll_period_us": 0, 00:22:40.226 "io_queue_requests": 512, 00:22:40.226 "delay_cmd_submit": true, 00:22:40.226 "transport_retry_count": 4, 00:22:40.226 "bdev_retry_count": 3, 00:22:40.226 "transport_ack_timeout": 0, 00:22:40.226 "ctrlr_loss_timeout_sec": 0, 00:22:40.226 "reconnect_delay_sec": 0, 00:22:40.226 "fast_io_fail_timeout_sec": 0, 00:22:40.226 "disable_auto_failback": false, 00:22:40.226 "generate_uuids": false, 00:22:40.226 "transport_tos": 0, 00:22:40.226 "nvme_error_stat": false, 00:22:40.226 "rdma_srq_size": 0, 00:22:40.226 "io_path_stat": false, 00:22:40.226 "allow_accel_sequence": false, 00:22:40.226 "rdma_max_cq_size": 0, 00:22:40.226 "rdma_cm_event_timeout_ms": 0, 00:22:40.226 "dhchap_digests": [ 00:22:40.226 "sha256", 00:22:40.226 "sha384", 00:22:40.226 "sha512" 00:22:40.226 ], 00:22:40.226 "dhchap_dhgroups": [ 00:22:40.226 "null", 00:22:40.226 "ffdhe2048", 00:22:40.226 "ffdhe3072", 00:22:40.226 "ffdhe4096", 00:22:40.226 "ffdhe6144", 00:22:40.226 "ffdhe8192" 00:22:40.226 ] 00:22:40.226 } 00:22:40.226 }, 00:22:40.226 { 00:22:40.226 "method": "bdev_nvme_attach_controller", 00:22:40.226 "params": { 00:22:40.226 "name": "TLSTEST", 00:22:40.226 "trtype": "TCP", 00:22:40.226 "adrfam": "IPv4", 00:22:40.226 "traddr": "10.0.0.2", 00:22:40.226 "trsvcid": "4420", 00:22:40.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.226 "prchk_reftag": false, 00:22:40.226 "prchk_guard": false, 00:22:40.226 "ctrlr_loss_timeout_sec": 0, 00:22:40.226 "reconnect_delay_sec": 0, 00:22:40.226 "fast_io_fail_timeout_sec": 0, 00:22:40.226 "psk": "/tmp/tmp.k4Na6MXcvk", 00:22:40.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.226 "hdgst": false, 00:22:40.226 "ddgst": false 00:22:40.226 } 00:22:40.226 }, 00:22:40.226 { 00:22:40.226 "method": "bdev_nvme_set_hotplug", 00:22:40.226 "params": { 00:22:40.226 "period_us": 100000, 00:22:40.226 "enable": false 00:22:40.226 } 
00:22:40.226 }, 00:22:40.226 { 00:22:40.226 "method": "bdev_wait_for_examine" 00:22:40.226 } 00:22:40.226 ] 00:22:40.226 }, 00:22:40.226 { 00:22:40.226 "subsystem": "nbd", 00:22:40.226 "config": [] 00:22:40.226 } 00:22:40.226 ] 00:22:40.226 }' 00:22:40.226 [2024-06-09 09:01:02.666106] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:22:40.226 [2024-06-09 09:01:02.666157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2637561 ] 00:22:40.226 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.226 [2024-06-09 09:01:02.716048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.226 [2024-06-09 09:01:02.768444] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.486 [2024-06-09 09:01:02.893436] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:40.486 [2024-06-09 09:01:02.893499] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:41.056 09:01:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:41.056 09:01:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:41.056 09:01:03 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:41.056 Running I/O for 10 seconds... 
00:22:51.088 00:22:51.088 Latency(us) 00:22:51.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.088 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:51.088 Verification LBA range: start 0x0 length 0x2000 00:22:51.088 TLSTESTn1 : 10.07 1995.50 7.79 0.00 0.00 63945.70 6198.61 152917.33 00:22:51.088 =================================================================================================================== 00:22:51.088 Total : 1995.50 7.79 0.00 0.00 63945.70 6198.61 152917.33 00:22:51.088 0 00:22:51.088 09:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:51.088 09:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2637561 00:22:51.088 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2637561 ']' 00:22:51.088 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2637561 00:22:51.088 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:51.088 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:51.348 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2637561 00:22:51.348 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:51.348 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:51.348 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2637561' 00:22:51.348 killing process with pid 2637561 00:22:51.348 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2637561 00:22:51.348 Received shutdown signal, test time was about 10.000000 seconds 00:22:51.348 00:22:51.348 Latency(us) 00:22:51.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.348 
=================================================================================================================== 00:22:51.348 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:51.349 [2024-06-09 09:01:13.696896] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:51.349 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2637561 00:22:51.349 09:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2637528 00:22:51.349 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2637528 ']' 00:22:51.349 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2637528 00:22:51.349 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:51.349 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:51.349 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2637528 00:22:51.349 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:51.349 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:51.349 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2637528' 00:22:51.349 killing process with pid 2637528 00:22:51.349 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2637528 00:22:51.349 [2024-06-09 09:01:13.867640] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:51.349 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2637528 00:22:51.609 09:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:51.609 09:01:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.609 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # 
xtrace_disable 00:22:51.609 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.609 09:01:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2639903 00:22:51.609 09:01:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2639903 00:22:51.609 09:01:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:51.609 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2639903 ']' 00:22:51.609 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.609 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:51.609 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.609 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:51.609 09:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.609 [2024-06-09 09:01:14.039505] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:22:51.609 [2024-06-09 09:01:14.039556] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.609 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.609 [2024-06-09 09:01:14.103518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.609 [2024-06-09 09:01:14.167750] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:51.609 [2024-06-09 09:01:14.167790] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.609 [2024-06-09 09:01:14.167797] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.609 [2024-06-09 09:01:14.167804] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.609 [2024-06-09 09:01:14.167809] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.609 [2024-06-09 09:01:14.167830] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.572 09:01:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:52.572 09:01:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:52.572 09:01:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:52.572 09:01:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:52.572 09:01:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.572 09:01:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.572 09:01:14 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.k4Na6MXcvk 00:22:52.572 09:01:14 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.k4Na6MXcvk 00:22:52.572 09:01:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:52.572 [2024-06-09 09:01:14.994342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.573 09:01:15 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:52.833 09:01:15 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:52.833 [2024-06-09 09:01:15.327166] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:52.833 [2024-06-09 09:01:15.327370] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.833 09:01:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:53.095 malloc0 00:22:53.095 09:01:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:53.368 09:01:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k4Na6MXcvk 00:22:53.368 [2024-06-09 09:01:15.815251] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:53.368 09:01:15 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:53.368 09:01:15 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2640263 00:22:53.368 09:01:15 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.368 09:01:15 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2640263 /var/tmp/bdevperf.sock 00:22:53.368 09:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2640263 ']' 00:22:53.368 09:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.368 09:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local 
max_retries=100 00:22:53.368 09:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.368 09:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:53.368 09:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.368 [2024-06-09 09:01:15.880284] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:22:53.368 [2024-06-09 09:01:15.880333] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2640263 ] 00:22:53.368 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.635 [2024-06-09 09:01:15.955992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.635 [2024-06-09 09:01:16.009211] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.206 09:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:54.206 09:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:54.206 09:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.k4Na6MXcvk 00:22:54.466 09:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:54.466 [2024-06-09 09:01:16.939011] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:54.726 nvme0n1 00:22:54.726 
09:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:54.726 Running I/O for 1 seconds... 00:22:55.667 00:22:55.667 Latency(us) 00:22:55.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.667 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:55.667 Verification LBA range: start 0x0 length 0x2000 00:22:55.667 nvme0n1 : 1.07 1642.61 6.42 0.00 0.00 75603.40 4997.12 116217.17 00:22:55.667 =================================================================================================================== 00:22:55.667 Total : 1642.61 6.42 0.00 0.00 75603.40 4997.12 116217.17 00:22:55.667 0 00:22:55.667 09:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2640263 00:22:55.667 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2640263 ']' 00:22:55.667 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2640263 00:22:55.667 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:55.667 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:55.667 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2640263 00:22:55.926 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:55.926 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:55.926 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2640263' 00:22:55.926 killing process with pid 2640263 00:22:55.926 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2640263 00:22:55.926 Received shutdown signal, test time was about 1.000000 seconds 00:22:55.926 00:22:55.926 Latency(us) 00:22:55.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:22:55.926 =================================================================================================================== 00:22:55.926 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:55.926 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2640263 00:22:55.926 09:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2639903 00:22:55.926 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2639903 ']' 00:22:55.926 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2639903 00:22:55.926 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:55.926 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:55.926 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2639903 00:22:55.926 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:55.926 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:55.926 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2639903' 00:22:55.926 killing process with pid 2639903 00:22:55.926 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2639903 00:22:55.926 [2024-06-09 09:01:18.439618] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:55.926 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2639903 00:22:56.187 09:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:22:56.187 09:01:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:56.187 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:56.187 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.187 09:01:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=2640736 00:22:56.187 09:01:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2640736 00:22:56.187 09:01:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:56.187 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2640736 ']' 00:22:56.187 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.187 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:56.187 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.187 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:56.187 09:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.187 [2024-06-09 09:01:18.635906] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:22:56.187 [2024-06-09 09:01:18.635960] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.187 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.187 [2024-06-09 09:01:18.699920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.448 [2024-06-09 09:01:18.763400] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.448 [2024-06-09 09:01:18.763444] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:56.448 [2024-06-09 09:01:18.763452] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.448 [2024-06-09 09:01:18.763458] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.448 [2024-06-09 09:01:18.763464] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:56.448 [2024-06-09 09:01:18.763488] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.020 [2024-06-09 09:01:19.450061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.020 malloc0 00:22:57.020 [2024-06-09 09:01:19.476823] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:57.020 [2024-06-09 09:01:19.477030] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2640967 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 
2640967 /var/tmp/bdevperf.sock 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2640967 ']' 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:57.020 09:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.020 [2024-06-09 09:01:19.560055] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:22:57.020 [2024-06-09 09:01:19.560145] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2640967 ] 00:22:57.280 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.280 [2024-06-09 09:01:19.637781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.280 [2024-06-09 09:01:19.690960] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.851 09:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:57.851 09:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:57.851 09:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.k4Na6MXcvk 00:22:58.112 09:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:58.112 [2024-06-09 09:01:20.584630] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:58.112 nvme0n1 00:22:58.375 09:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:58.375 Running I/O for 1 seconds... 
00:22:59.316 00:22:59.316 Latency(us) 00:22:59.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.316 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:59.316 Verification LBA range: start 0x0 length 0x2000 00:22:59.316 nvme0n1 : 1.08 1593.24 6.22 0.00 0.00 77935.48 4887.89 131072.00 00:22:59.316 =================================================================================================================== 00:22:59.316 Total : 1593.24 6.22 0.00 0.00 77935.48 4887.89 131072.00 00:22:59.316 0 00:22:59.316 09:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:22:59.316 09:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:59.316 09:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.576 09:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:59.576 09:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:22:59.576 "subsystems": [ 00:22:59.576 { 00:22:59.576 "subsystem": "keyring", 00:22:59.576 "config": [ 00:22:59.576 { 00:22:59.576 "method": "keyring_file_add_key", 00:22:59.576 "params": { 00:22:59.576 "name": "key0", 00:22:59.576 "path": "/tmp/tmp.k4Na6MXcvk" 00:22:59.576 } 00:22:59.576 } 00:22:59.576 ] 00:22:59.576 }, 00:22:59.576 { 00:22:59.576 "subsystem": "iobuf", 00:22:59.576 "config": [ 00:22:59.576 { 00:22:59.576 "method": "iobuf_set_options", 00:22:59.576 "params": { 00:22:59.576 "small_pool_count": 8192, 00:22:59.576 "large_pool_count": 1024, 00:22:59.576 "small_bufsize": 8192, 00:22:59.576 "large_bufsize": 135168 00:22:59.576 } 00:22:59.576 } 00:22:59.576 ] 00:22:59.576 }, 00:22:59.576 { 00:22:59.576 "subsystem": "sock", 00:22:59.576 "config": [ 00:22:59.576 { 00:22:59.576 "method": "sock_set_default_impl", 00:22:59.576 "params": { 00:22:59.576 "impl_name": "posix" 00:22:59.576 } 00:22:59.576 }, 00:22:59.576 { 00:22:59.576 "method": "sock_impl_set_options", 00:22:59.576 
"params": { 00:22:59.576 "impl_name": "ssl", 00:22:59.576 "recv_buf_size": 4096, 00:22:59.576 "send_buf_size": 4096, 00:22:59.576 "enable_recv_pipe": true, 00:22:59.576 "enable_quickack": false, 00:22:59.576 "enable_placement_id": 0, 00:22:59.576 "enable_zerocopy_send_server": true, 00:22:59.576 "enable_zerocopy_send_client": false, 00:22:59.576 "zerocopy_threshold": 0, 00:22:59.576 "tls_version": 0, 00:22:59.576 "enable_ktls": false 00:22:59.576 } 00:22:59.576 }, 00:22:59.576 { 00:22:59.576 "method": "sock_impl_set_options", 00:22:59.576 "params": { 00:22:59.576 "impl_name": "posix", 00:22:59.576 "recv_buf_size": 2097152, 00:22:59.576 "send_buf_size": 2097152, 00:22:59.576 "enable_recv_pipe": true, 00:22:59.576 "enable_quickack": false, 00:22:59.576 "enable_placement_id": 0, 00:22:59.576 "enable_zerocopy_send_server": true, 00:22:59.576 "enable_zerocopy_send_client": false, 00:22:59.576 "zerocopy_threshold": 0, 00:22:59.576 "tls_version": 0, 00:22:59.576 "enable_ktls": false 00:22:59.576 } 00:22:59.576 } 00:22:59.576 ] 00:22:59.576 }, 00:22:59.576 { 00:22:59.576 "subsystem": "vmd", 00:22:59.576 "config": [] 00:22:59.576 }, 00:22:59.576 { 00:22:59.576 "subsystem": "accel", 00:22:59.576 "config": [ 00:22:59.576 { 00:22:59.576 "method": "accel_set_options", 00:22:59.576 "params": { 00:22:59.576 "small_cache_size": 128, 00:22:59.576 "large_cache_size": 16, 00:22:59.576 "task_count": 2048, 00:22:59.576 "sequence_count": 2048, 00:22:59.576 "buf_count": 2048 00:22:59.576 } 00:22:59.576 } 00:22:59.576 ] 00:22:59.576 }, 00:22:59.576 { 00:22:59.576 "subsystem": "bdev", 00:22:59.576 "config": [ 00:22:59.576 { 00:22:59.576 "method": "bdev_set_options", 00:22:59.576 "params": { 00:22:59.576 "bdev_io_pool_size": 65535, 00:22:59.576 "bdev_io_cache_size": 256, 00:22:59.576 "bdev_auto_examine": true, 00:22:59.576 "iobuf_small_cache_size": 128, 00:22:59.576 "iobuf_large_cache_size": 16 00:22:59.576 } 00:22:59.576 }, 00:22:59.576 { 00:22:59.576 "method": "bdev_raid_set_options", 
00:22:59.576 "params": { 00:22:59.576 "process_window_size_kb": 1024 00:22:59.576 } 00:22:59.576 }, 00:22:59.576 { 00:22:59.576 "method": "bdev_iscsi_set_options", 00:22:59.576 "params": { 00:22:59.576 "timeout_sec": 30 00:22:59.576 } 00:22:59.576 }, 00:22:59.576 { 00:22:59.577 "method": "bdev_nvme_set_options", 00:22:59.577 "params": { 00:22:59.577 "action_on_timeout": "none", 00:22:59.577 "timeout_us": 0, 00:22:59.577 "timeout_admin_us": 0, 00:22:59.577 "keep_alive_timeout_ms": 10000, 00:22:59.577 "arbitration_burst": 0, 00:22:59.577 "low_priority_weight": 0, 00:22:59.577 "medium_priority_weight": 0, 00:22:59.577 "high_priority_weight": 0, 00:22:59.577 "nvme_adminq_poll_period_us": 10000, 00:22:59.577 "nvme_ioq_poll_period_us": 0, 00:22:59.577 "io_queue_requests": 0, 00:22:59.577 "delay_cmd_submit": true, 00:22:59.577 "transport_retry_count": 4, 00:22:59.577 "bdev_retry_count": 3, 00:22:59.577 "transport_ack_timeout": 0, 00:22:59.577 "ctrlr_loss_timeout_sec": 0, 00:22:59.577 "reconnect_delay_sec": 0, 00:22:59.577 "fast_io_fail_timeout_sec": 0, 00:22:59.577 "disable_auto_failback": false, 00:22:59.577 "generate_uuids": false, 00:22:59.577 "transport_tos": 0, 00:22:59.577 "nvme_error_stat": false, 00:22:59.577 "rdma_srq_size": 0, 00:22:59.577 "io_path_stat": false, 00:22:59.577 "allow_accel_sequence": false, 00:22:59.577 "rdma_max_cq_size": 0, 00:22:59.577 "rdma_cm_event_timeout_ms": 0, 00:22:59.577 "dhchap_digests": [ 00:22:59.577 "sha256", 00:22:59.577 "sha384", 00:22:59.577 "sha512" 00:22:59.577 ], 00:22:59.577 "dhchap_dhgroups": [ 00:22:59.577 "null", 00:22:59.577 "ffdhe2048", 00:22:59.577 "ffdhe3072", 00:22:59.577 "ffdhe4096", 00:22:59.577 "ffdhe6144", 00:22:59.577 "ffdhe8192" 00:22:59.577 ] 00:22:59.577 } 00:22:59.577 }, 00:22:59.577 { 00:22:59.577 "method": "bdev_nvme_set_hotplug", 00:22:59.577 "params": { 00:22:59.577 "period_us": 100000, 00:22:59.577 "enable": false 00:22:59.577 } 00:22:59.577 }, 00:22:59.577 { 00:22:59.577 "method": "bdev_malloc_create", 
00:22:59.577 "params": { 00:22:59.577 "name": "malloc0", 00:22:59.577 "num_blocks": 8192, 00:22:59.577 "block_size": 4096, 00:22:59.577 "physical_block_size": 4096, 00:22:59.577 "uuid": "d2111e64-ba95-478b-8f58-38cf3d396026", 00:22:59.577 "optimal_io_boundary": 0 00:22:59.577 } 00:22:59.577 }, 00:22:59.577 { 00:22:59.577 "method": "bdev_wait_for_examine" 00:22:59.577 } 00:22:59.577 ] 00:22:59.577 }, 00:22:59.577 { 00:22:59.577 "subsystem": "nbd", 00:22:59.577 "config": [] 00:22:59.577 }, 00:22:59.577 { 00:22:59.577 "subsystem": "scheduler", 00:22:59.577 "config": [ 00:22:59.577 { 00:22:59.577 "method": "framework_set_scheduler", 00:22:59.577 "params": { 00:22:59.577 "name": "static" 00:22:59.577 } 00:22:59.577 } 00:22:59.577 ] 00:22:59.577 }, 00:22:59.577 { 00:22:59.577 "subsystem": "nvmf", 00:22:59.577 "config": [ 00:22:59.577 { 00:22:59.577 "method": "nvmf_set_config", 00:22:59.577 "params": { 00:22:59.577 "discovery_filter": "match_any", 00:22:59.577 "admin_cmd_passthru": { 00:22:59.577 "identify_ctrlr": false 00:22:59.577 } 00:22:59.577 } 00:22:59.577 }, 00:22:59.577 { 00:22:59.577 "method": "nvmf_set_max_subsystems", 00:22:59.577 "params": { 00:22:59.577 "max_subsystems": 1024 00:22:59.577 } 00:22:59.577 }, 00:22:59.577 { 00:22:59.577 "method": "nvmf_set_crdt", 00:22:59.577 "params": { 00:22:59.577 "crdt1": 0, 00:22:59.577 "crdt2": 0, 00:22:59.577 "crdt3": 0 00:22:59.577 } 00:22:59.577 }, 00:22:59.577 { 00:22:59.577 "method": "nvmf_create_transport", 00:22:59.577 "params": { 00:22:59.577 "trtype": "TCP", 00:22:59.577 "max_queue_depth": 128, 00:22:59.577 "max_io_qpairs_per_ctrlr": 127, 00:22:59.577 "in_capsule_data_size": 4096, 00:22:59.577 "max_io_size": 131072, 00:22:59.577 "io_unit_size": 131072, 00:22:59.577 "max_aq_depth": 128, 00:22:59.577 "num_shared_buffers": 511, 00:22:59.577 "buf_cache_size": 4294967295, 00:22:59.577 "dif_insert_or_strip": false, 00:22:59.577 "zcopy": false, 00:22:59.577 "c2h_success": false, 00:22:59.577 "sock_priority": 0, 
00:22:59.577 "abort_timeout_sec": 1, 00:22:59.577 "ack_timeout": 0, 00:22:59.577 "data_wr_pool_size": 0 00:22:59.577 } 00:22:59.577 }, 00:22:59.577 { 00:22:59.577 "method": "nvmf_create_subsystem", 00:22:59.577 "params": { 00:22:59.577 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.577 "allow_any_host": false, 00:22:59.577 "serial_number": "00000000000000000000", 00:22:59.577 "model_number": "SPDK bdev Controller", 00:22:59.577 "max_namespaces": 32, 00:22:59.577 "min_cntlid": 1, 00:22:59.577 "max_cntlid": 65519, 00:22:59.577 "ana_reporting": false 00:22:59.577 } 00:22:59.577 }, 00:22:59.577 { 00:22:59.577 "method": "nvmf_subsystem_add_host", 00:22:59.577 "params": { 00:22:59.577 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.577 "host": "nqn.2016-06.io.spdk:host1", 00:22:59.577 "psk": "key0" 00:22:59.577 } 00:22:59.577 }, 00:22:59.577 { 00:22:59.577 "method": "nvmf_subsystem_add_ns", 00:22:59.577 "params": { 00:22:59.577 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.577 "namespace": { 00:22:59.577 "nsid": 1, 00:22:59.577 "bdev_name": "malloc0", 00:22:59.577 "nguid": "D2111E64BA95478B8F5838CF3D396026", 00:22:59.577 "uuid": "d2111e64-ba95-478b-8f58-38cf3d396026", 00:22:59.577 "no_auto_visible": false 00:22:59.577 } 00:22:59.577 } 00:22:59.577 }, 00:22:59.577 { 00:22:59.577 "method": "nvmf_subsystem_add_listener", 00:22:59.577 "params": { 00:22:59.577 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.577 "listen_address": { 00:22:59.577 "trtype": "TCP", 00:22:59.577 "adrfam": "IPv4", 00:22:59.577 "traddr": "10.0.0.2", 00:22:59.577 "trsvcid": "4420" 00:22:59.577 }, 00:22:59.577 "secure_channel": true 00:22:59.577 } 00:22:59.577 } 00:22:59.577 ] 00:22:59.577 } 00:22:59.577 ] 00:22:59.577 }' 00:22:59.577 09:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:59.838 09:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:22:59.838 "subsystems": [ 00:22:59.838 { 
00:22:59.838 "subsystem": "keyring", 00:22:59.838 "config": [ 00:22:59.838 { 00:22:59.838 "method": "keyring_file_add_key", 00:22:59.838 "params": { 00:22:59.838 "name": "key0", 00:22:59.838 "path": "/tmp/tmp.k4Na6MXcvk" 00:22:59.838 } 00:22:59.838 } 00:22:59.838 ] 00:22:59.838 }, 00:22:59.838 { 00:22:59.838 "subsystem": "iobuf", 00:22:59.838 "config": [ 00:22:59.838 { 00:22:59.838 "method": "iobuf_set_options", 00:22:59.838 "params": { 00:22:59.838 "small_pool_count": 8192, 00:22:59.838 "large_pool_count": 1024, 00:22:59.838 "small_bufsize": 8192, 00:22:59.838 "large_bufsize": 135168 00:22:59.838 } 00:22:59.838 } 00:22:59.838 ] 00:22:59.838 }, 00:22:59.838 { 00:22:59.838 "subsystem": "sock", 00:22:59.838 "config": [ 00:22:59.838 { 00:22:59.838 "method": "sock_set_default_impl", 00:22:59.838 "params": { 00:22:59.838 "impl_name": "posix" 00:22:59.838 } 00:22:59.838 }, 00:22:59.838 { 00:22:59.838 "method": "sock_impl_set_options", 00:22:59.838 "params": { 00:22:59.838 "impl_name": "ssl", 00:22:59.838 "recv_buf_size": 4096, 00:22:59.838 "send_buf_size": 4096, 00:22:59.838 "enable_recv_pipe": true, 00:22:59.838 "enable_quickack": false, 00:22:59.838 "enable_placement_id": 0, 00:22:59.838 "enable_zerocopy_send_server": true, 00:22:59.838 "enable_zerocopy_send_client": false, 00:22:59.838 "zerocopy_threshold": 0, 00:22:59.838 "tls_version": 0, 00:22:59.838 "enable_ktls": false 00:22:59.838 } 00:22:59.838 }, 00:22:59.838 { 00:22:59.838 "method": "sock_impl_set_options", 00:22:59.838 "params": { 00:22:59.838 "impl_name": "posix", 00:22:59.838 "recv_buf_size": 2097152, 00:22:59.838 "send_buf_size": 2097152, 00:22:59.838 "enable_recv_pipe": true, 00:22:59.838 "enable_quickack": false, 00:22:59.838 "enable_placement_id": 0, 00:22:59.838 "enable_zerocopy_send_server": true, 00:22:59.838 "enable_zerocopy_send_client": false, 00:22:59.838 "zerocopy_threshold": 0, 00:22:59.838 "tls_version": 0, 00:22:59.838 "enable_ktls": false 00:22:59.838 } 00:22:59.838 } 00:22:59.838 ] 
00:22:59.838 }, 00:22:59.838 { 00:22:59.838 "subsystem": "vmd", 00:22:59.838 "config": [] 00:22:59.838 }, 00:22:59.838 { 00:22:59.838 "subsystem": "accel", 00:22:59.838 "config": [ 00:22:59.838 { 00:22:59.838 "method": "accel_set_options", 00:22:59.838 "params": { 00:22:59.838 "small_cache_size": 128, 00:22:59.838 "large_cache_size": 16, 00:22:59.838 "task_count": 2048, 00:22:59.838 "sequence_count": 2048, 00:22:59.838 "buf_count": 2048 00:22:59.838 } 00:22:59.838 } 00:22:59.838 ] 00:22:59.838 }, 00:22:59.838 { 00:22:59.838 "subsystem": "bdev", 00:22:59.838 "config": [ 00:22:59.838 { 00:22:59.838 "method": "bdev_set_options", 00:22:59.838 "params": { 00:22:59.838 "bdev_io_pool_size": 65535, 00:22:59.838 "bdev_io_cache_size": 256, 00:22:59.838 "bdev_auto_examine": true, 00:22:59.838 "iobuf_small_cache_size": 128, 00:22:59.838 "iobuf_large_cache_size": 16 00:22:59.838 } 00:22:59.838 }, 00:22:59.838 { 00:22:59.838 "method": "bdev_raid_set_options", 00:22:59.838 "params": { 00:22:59.838 "process_window_size_kb": 1024 00:22:59.838 } 00:22:59.838 }, 00:22:59.838 { 00:22:59.838 "method": "bdev_iscsi_set_options", 00:22:59.838 "params": { 00:22:59.838 "timeout_sec": 30 00:22:59.838 } 00:22:59.838 }, 00:22:59.838 { 00:22:59.838 "method": "bdev_nvme_set_options", 00:22:59.838 "params": { 00:22:59.838 "action_on_timeout": "none", 00:22:59.838 "timeout_us": 0, 00:22:59.838 "timeout_admin_us": 0, 00:22:59.838 "keep_alive_timeout_ms": 10000, 00:22:59.838 "arbitration_burst": 0, 00:22:59.838 "low_priority_weight": 0, 00:22:59.838 "medium_priority_weight": 0, 00:22:59.838 "high_priority_weight": 0, 00:22:59.838 "nvme_adminq_poll_period_us": 10000, 00:22:59.838 "nvme_ioq_poll_period_us": 0, 00:22:59.838 "io_queue_requests": 512, 00:22:59.838 "delay_cmd_submit": true, 00:22:59.838 "transport_retry_count": 4, 00:22:59.838 "bdev_retry_count": 3, 00:22:59.838 "transport_ack_timeout": 0, 00:22:59.838 "ctrlr_loss_timeout_sec": 0, 00:22:59.838 "reconnect_delay_sec": 0, 00:22:59.838 
"fast_io_fail_timeout_sec": 0, 00:22:59.838 "disable_auto_failback": false, 00:22:59.838 "generate_uuids": false, 00:22:59.838 "transport_tos": 0, 00:22:59.838 "nvme_error_stat": false, 00:22:59.838 "rdma_srq_size": 0, 00:22:59.838 "io_path_stat": false, 00:22:59.838 "allow_accel_sequence": false, 00:22:59.838 "rdma_max_cq_size": 0, 00:22:59.838 "rdma_cm_event_timeout_ms": 0, 00:22:59.838 "dhchap_digests": [ 00:22:59.838 "sha256", 00:22:59.838 "sha384", 00:22:59.838 "sha512" 00:22:59.838 ], 00:22:59.838 "dhchap_dhgroups": [ 00:22:59.838 "null", 00:22:59.838 "ffdhe2048", 00:22:59.838 "ffdhe3072", 00:22:59.838 "ffdhe4096", 00:22:59.838 "ffdhe6144", 00:22:59.838 "ffdhe8192" 00:22:59.838 ] 00:22:59.838 } 00:22:59.838 }, 00:22:59.838 { 00:22:59.838 "method": "bdev_nvme_attach_controller", 00:22:59.838 "params": { 00:22:59.838 "name": "nvme0", 00:22:59.838 "trtype": "TCP", 00:22:59.838 "adrfam": "IPv4", 00:22:59.838 "traddr": "10.0.0.2", 00:22:59.838 "trsvcid": "4420", 00:22:59.838 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.838 "prchk_reftag": false, 00:22:59.838 "prchk_guard": false, 00:22:59.838 "ctrlr_loss_timeout_sec": 0, 00:22:59.838 "reconnect_delay_sec": 0, 00:22:59.838 "fast_io_fail_timeout_sec": 0, 00:22:59.838 "psk": "key0", 00:22:59.838 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:59.838 "hdgst": false, 00:22:59.838 "ddgst": false 00:22:59.838 } 00:22:59.838 }, 00:22:59.838 { 00:22:59.838 "method": "bdev_nvme_set_hotplug", 00:22:59.838 "params": { 00:22:59.838 "period_us": 100000, 00:22:59.838 "enable": false 00:22:59.838 } 00:22:59.838 }, 00:22:59.838 { 00:22:59.838 "method": "bdev_enable_histogram", 00:22:59.838 "params": { 00:22:59.838 "name": "nvme0n1", 00:22:59.838 "enable": true 00:22:59.838 } 00:22:59.838 }, 00:22:59.838 { 00:22:59.838 "method": "bdev_wait_for_examine" 00:22:59.838 } 00:22:59.838 ] 00:22:59.838 }, 00:22:59.838 { 00:22:59.838 "subsystem": "nbd", 00:22:59.838 "config": [] 00:22:59.838 } 00:22:59.838 ] 00:22:59.838 }' 00:22:59.838 
09:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2640967 00:22:59.838 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2640967 ']' 00:22:59.838 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2640967 00:22:59.838 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:59.839 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:59.839 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2640967 00:22:59.839 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:59.839 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:59.839 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2640967' 00:22:59.839 killing process with pid 2640967 00:22:59.839 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2640967 00:22:59.839 Received shutdown signal, test time was about 1.000000 seconds 00:22:59.839 00:22:59.839 Latency(us) 00:22:59.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.839 =================================================================================================================== 00:22:59.839 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:59.839 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2640967 00:22:59.839 09:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2640736 00:22:59.839 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2640736 ']' 00:22:59.839 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2640736 00:22:59.839 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:59.839 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:59.839 09:01:22 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2640736 00:23:00.100 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:00.100 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:00.100 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2640736' 00:23:00.100 killing process with pid 2640736 00:23:00.100 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2640736 00:23:00.100 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2640736 00:23:00.100 09:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:00.100 09:01:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.100 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:00.100 09:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:00.100 "subsystems": [ 00:23:00.100 { 00:23:00.100 "subsystem": "keyring", 00:23:00.100 "config": [ 00:23:00.100 { 00:23:00.100 "method": "keyring_file_add_key", 00:23:00.100 "params": { 00:23:00.100 "name": "key0", 00:23:00.100 "path": "/tmp/tmp.k4Na6MXcvk" 00:23:00.100 } 00:23:00.100 } 00:23:00.100 ] 00:23:00.100 }, 00:23:00.100 { 00:23:00.100 "subsystem": "iobuf", 00:23:00.100 "config": [ 00:23:00.100 { 00:23:00.100 "method": "iobuf_set_options", 00:23:00.100 "params": { 00:23:00.100 "small_pool_count": 8192, 00:23:00.100 "large_pool_count": 1024, 00:23:00.100 "small_bufsize": 8192, 00:23:00.100 "large_bufsize": 135168 00:23:00.100 } 00:23:00.100 } 00:23:00.100 ] 00:23:00.100 }, 00:23:00.100 { 00:23:00.100 "subsystem": "sock", 00:23:00.100 "config": [ 00:23:00.100 { 00:23:00.100 "method": "sock_set_default_impl", 00:23:00.100 "params": { 00:23:00.100 "impl_name": "posix" 00:23:00.100 } 00:23:00.100 }, 00:23:00.100 { 00:23:00.100 "method": "sock_impl_set_options", 00:23:00.100 "params": { 00:23:00.100 
"impl_name": "ssl", 00:23:00.100 "recv_buf_size": 4096, 00:23:00.100 "send_buf_size": 4096, 00:23:00.100 "enable_recv_pipe": true, 00:23:00.100 "enable_quickack": false, 00:23:00.100 "enable_placement_id": 0, 00:23:00.100 "enable_zerocopy_send_server": true, 00:23:00.100 "enable_zerocopy_send_client": false, 00:23:00.100 "zerocopy_threshold": 0, 00:23:00.100 "tls_version": 0, 00:23:00.100 "enable_ktls": false 00:23:00.100 } 00:23:00.100 }, 00:23:00.100 { 00:23:00.100 "method": "sock_impl_set_options", 00:23:00.100 "params": { 00:23:00.100 "impl_name": "posix", 00:23:00.100 "recv_buf_size": 2097152, 00:23:00.100 "send_buf_size": 2097152, 00:23:00.100 "enable_recv_pipe": true, 00:23:00.100 "enable_quickack": false, 00:23:00.100 "enable_placement_id": 0, 00:23:00.100 "enable_zerocopy_send_server": true, 00:23:00.100 "enable_zerocopy_send_client": false, 00:23:00.100 "zerocopy_threshold": 0, 00:23:00.100 "tls_version": 0, 00:23:00.100 "enable_ktls": false 00:23:00.100 } 00:23:00.100 } 00:23:00.100 ] 00:23:00.100 }, 00:23:00.100 { 00:23:00.100 "subsystem": "vmd", 00:23:00.100 "config": [] 00:23:00.100 }, 00:23:00.100 { 00:23:00.100 "subsystem": "accel", 00:23:00.100 "config": [ 00:23:00.100 { 00:23:00.100 "method": "accel_set_options", 00:23:00.100 "params": { 00:23:00.100 "small_cache_size": 128, 00:23:00.100 "large_cache_size": 16, 00:23:00.100 "task_count": 2048, 00:23:00.100 "sequence_count": 2048, 00:23:00.100 "buf_count": 2048 00:23:00.100 } 00:23:00.100 } 00:23:00.100 ] 00:23:00.100 }, 00:23:00.100 { 00:23:00.100 "subsystem": "bdev", 00:23:00.100 "config": [ 00:23:00.100 { 00:23:00.100 "method": "bdev_set_options", 00:23:00.100 "params": { 00:23:00.100 "bdev_io_pool_size": 65535, 00:23:00.100 "bdev_io_cache_size": 256, 00:23:00.100 "bdev_auto_examine": true, 00:23:00.100 "iobuf_small_cache_size": 128, 00:23:00.100 "iobuf_large_cache_size": 16 00:23:00.100 } 00:23:00.100 }, 00:23:00.100 { 00:23:00.100 "method": "bdev_raid_set_options", 00:23:00.100 "params": { 
00:23:00.100 "process_window_size_kb": 1024 00:23:00.100 } 00:23:00.100 }, 00:23:00.100 { 00:23:00.100 "method": "bdev_iscsi_set_options", 00:23:00.100 "params": { 00:23:00.100 "timeout_sec": 30 00:23:00.100 } 00:23:00.100 }, 00:23:00.100 { 00:23:00.100 "method": "bdev_nvme_set_options", 00:23:00.100 "params": { 00:23:00.100 "action_on_timeout": "none", 00:23:00.100 "timeout_us": 0, 00:23:00.100 "timeout_admin_us": 0, 00:23:00.100 "keep_alive_timeout_ms": 10000, 00:23:00.100 "arbitration_burst": 0, 00:23:00.100 "low_priority_weight": 0, 00:23:00.100 "medium_priority_weight": 0, 00:23:00.100 "high_priority_weight": 0, 00:23:00.100 "nvme_adminq_poll_period_us": 10000, 00:23:00.100 "nvme_ioq_poll_period_us": 0, 00:23:00.100 "io_queue_requests": 0, 00:23:00.100 "delay_cmd_submit": true, 00:23:00.100 "transport_retry_count": 4, 00:23:00.100 "bdev_retry_count": 3, 00:23:00.100 "transport_ack_timeout": 0, 00:23:00.100 "ctrlr_loss_timeout_sec": 0, 00:23:00.100 "reconnect_delay_sec": 0, 00:23:00.100 "fast_io_fail_timeout_sec": 0, 00:23:00.100 "disable_auto_failback": false, 00:23:00.100 "generate_uuids": false, 00:23:00.100 "transport_tos": 0, 00:23:00.100 "nvme_error_stat": false, 00:23:00.100 "rdma_srq_size": 0, 00:23:00.100 "io_path_stat": false, 00:23:00.100 "allow_accel_sequence": false, 00:23:00.100 "rdma_max_cq_size": 0, 00:23:00.100 "rdma_cm_event_timeout_ms": 0, 00:23:00.100 "dhchap_digests": [ 00:23:00.100 "sha256", 00:23:00.100 "sha384", 00:23:00.100 "sha512" 00:23:00.100 ], 00:23:00.100 "dhchap_dhgroups": [ 00:23:00.100 "null", 00:23:00.100 "ffdhe2048", 00:23:00.100 "ffdhe3072", 00:23:00.100 "ffdhe4096", 00:23:00.100 "ffdhe6144", 00:23:00.100 "ffdhe8192" 00:23:00.100 ] 00:23:00.100 } 00:23:00.100 }, 00:23:00.100 { 00:23:00.100 "method": "bdev_nvme_set_hotplug", 00:23:00.100 "params": { 00:23:00.100 "period_us": 100000, 00:23:00.100 "enable": false 00:23:00.100 } 00:23:00.100 }, 00:23:00.100 { 00:23:00.100 "method": "bdev_malloc_create", 00:23:00.100 "params": { 
00:23:00.100 "name": "malloc0", 00:23:00.100 "num_blocks": 8192, 00:23:00.100 "block_size": 4096, 00:23:00.100 "physical_block_size": 4096, 00:23:00.100 "uuid": "d2111e64-ba95-478b-8f58-38cf3d396026", 00:23:00.100 "optimal_io_boundary": 0 00:23:00.100 } 00:23:00.100 }, 00:23:00.100 { 00:23:00.100 "method": "bdev_wait_for_examine" 00:23:00.100 } 00:23:00.100 ] 00:23:00.100 }, 00:23:00.100 { 00:23:00.100 "subsystem": "nbd", 00:23:00.100 "config": [] 00:23:00.101 }, 00:23:00.101 { 00:23:00.101 "subsystem": "scheduler", 00:23:00.101 "config": [ 00:23:00.101 { 00:23:00.101 "method": "framework_set_scheduler", 00:23:00.101 "params": { 00:23:00.101 "name": "static" 00:23:00.101 } 00:23:00.101 } 00:23:00.101 ] 00:23:00.101 }, 00:23:00.101 { 00:23:00.101 "subsystem": "nvmf", 00:23:00.101 "config": [ 00:23:00.101 { 00:23:00.101 "method": "nvmf_set_config", 00:23:00.101 "params": { 00:23:00.101 "discovery_filter": "match_any", 00:23:00.101 "admin_cmd_passthru": { 00:23:00.101 "identify_ctrlr": false 00:23:00.101 } 00:23:00.101 } 00:23:00.101 }, 00:23:00.101 { 00:23:00.101 "method": "nvmf_set_max_subsystems", 00:23:00.101 "params": { 00:23:00.101 "max_subsystems": 1024 00:23:00.101 } 00:23:00.101 }, 00:23:00.101 { 00:23:00.101 "method": "nvmf_set_crdt", 00:23:00.101 "params": { 00:23:00.101 "crdt1": 0, 00:23:00.101 "crdt2": 0, 00:23:00.101 "crdt3": 0 00:23:00.101 } 00:23:00.101 }, 00:23:00.101 { 00:23:00.101 "method": "nvmf_create_transport", 00:23:00.101 "params": { 00:23:00.101 "trtype": "TCP", 00:23:00.101 "max_queue_depth": 128, 00:23:00.101 "max_io_qpairs_per_ctrlr": 127, 00:23:00.101 "in_capsule_data_size": 4096, 00:23:00.101 "max_io_size": 131072, 00:23:00.101 "io_unit_size": 131072, 00:23:00.101 "max_aq_depth": 128, 00:23:00.101 "num_shared_buffers": 511, 00:23:00.101 "buf_cache_size": 4294967295, 00:23:00.101 "dif_insert_or_strip": false, 00:23:00.101 "zcopy": false, 00:23:00.101 "c2h_success": false, 00:23:00.101 "sock_priority": 0, 00:23:00.101 "abort_timeout_sec": 
1, 00:23:00.101 "ack_timeout": 0, 00:23:00.101 "data_wr_pool_size": 0 00:23:00.101 } 00:23:00.101 }, 00:23:00.101 { 00:23:00.101 "method": "nvmf_create_subsystem", 00:23:00.101 "params": { 00:23:00.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.101 "allow_any_host": false, 00:23:00.101 "serial_number": "00000000000000000000", 00:23:00.101 "model_number": "SPDK bdev Controller", 00:23:00.101 "max_namespaces": 32, 00:23:00.101 "min_cntlid": 1, 00:23:00.101 "max_cntlid": 65519, 00:23:00.101 "ana_reporting": false 00:23:00.101 } 00:23:00.101 }, 00:23:00.101 { 00:23:00.101 "method": "nvmf_subsystem_add_host", 00:23:00.101 "params": { 00:23:00.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.101 "host": "nqn.2016-06.io.spdk:host1", 00:23:00.101 "psk": "key0" 00:23:00.101 } 00:23:00.101 }, 00:23:00.101 { 00:23:00.101 "method": "nvmf_subsystem_add_ns", 00:23:00.101 "params": { 00:23:00.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.101 "namespace": { 00:23:00.101 "nsid": 1, 00:23:00.101 "bdev_name": "malloc0", 00:23:00.101 "nguid": "D2111E64BA95478B8F5838CF3D396026", 00:23:00.101 "uuid": "d2111e64-ba95-478b-8f58-38cf3d396026", 00:23:00.101 "no_auto_visible": false 00:23:00.101 } 00:23:00.101 } 00:23:00.101 }, 00:23:00.101 { 00:23:00.101 "method": "nvmf_subsystem_add_listener", 00:23:00.101 "params": { 00:23:00.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.101 "listen_address": { 00:23:00.101 "trtype": "TCP", 00:23:00.101 "adrfam": "IPv4", 00:23:00.101 "traddr": "10.0.0.2", 00:23:00.101 "trsvcid": "4420" 00:23:00.101 }, 00:23:00.101 "secure_channel": true 00:23:00.101 } 00:23:00.101 } 00:23:00.101 ] 00:23:00.101 } 00:23:00.101 ] 00:23:00.101 }' 00:23:00.101 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.101 09:01:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2641653 00:23:00.101 09:01:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:00.101 09:01:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2641653 00:23:00.101 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2641653 ']' 00:23:00.101 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.101 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:00.101 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.101 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:00.101 09:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.101 [2024-06-09 09:01:22.600931] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:23:00.101 [2024-06-09 09:01:22.600985] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.101 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.362 [2024-06-09 09:01:22.664435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.362 [2024-06-09 09:01:22.728728] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.362 [2024-06-09 09:01:22.728765] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:00.362 [2024-06-09 09:01:22.728776] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.362 [2024-06-09 09:01:22.728782] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.362 [2024-06-09 09:01:22.728788] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.362 [2024-06-09 09:01:22.728837] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.622 [2024-06-09 09:01:22.926053] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.622 [2024-06-09 09:01:22.958057] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:00.622 [2024-06-09 09:01:22.968709] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.883 09:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:00.883 09:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:23:00.883 09:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:00.883 09:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:00.883 09:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.883 09:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.883 09:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2641683 00:23:00.883 09:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2641683 /var/tmp/bdevperf.sock 00:23:00.883 09:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2641683 ']' 00:23:00.883 09:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.883 09:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:00.883 09:01:23 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.883 09:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:00.883 09:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:00.883 09:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.883 09:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:00.883 "subsystems": [ 00:23:00.883 { 00:23:00.883 "subsystem": "keyring", 00:23:00.883 "config": [ 00:23:00.883 { 00:23:00.883 "method": "keyring_file_add_key", 00:23:00.883 "params": { 00:23:00.883 "name": "key0", 00:23:00.883 "path": "/tmp/tmp.k4Na6MXcvk" 00:23:00.883 } 00:23:00.883 } 00:23:00.883 ] 00:23:00.883 }, 00:23:00.883 { 00:23:00.883 "subsystem": "iobuf", 00:23:00.883 "config": [ 00:23:00.883 { 00:23:00.883 "method": "iobuf_set_options", 00:23:00.883 "params": { 00:23:00.883 "small_pool_count": 8192, 00:23:00.883 "large_pool_count": 1024, 00:23:00.883 "small_bufsize": 8192, 00:23:00.883 "large_bufsize": 135168 00:23:00.883 } 00:23:00.883 } 00:23:00.883 ] 00:23:00.883 }, 00:23:00.883 { 00:23:00.883 "subsystem": "sock", 00:23:00.883 "config": [ 00:23:00.883 { 00:23:00.883 "method": "sock_set_default_impl", 00:23:00.883 "params": { 00:23:00.883 "impl_name": "posix" 00:23:00.883 } 00:23:00.883 }, 00:23:00.883 { 00:23:00.883 "method": "sock_impl_set_options", 00:23:00.883 "params": { 00:23:00.883 "impl_name": "ssl", 00:23:00.883 "recv_buf_size": 4096, 00:23:00.883 "send_buf_size": 4096, 00:23:00.883 "enable_recv_pipe": true, 00:23:00.883 "enable_quickack": false, 00:23:00.883 "enable_placement_id": 0, 00:23:00.883 
"enable_zerocopy_send_server": true, 00:23:00.883 "enable_zerocopy_send_client": false, 00:23:00.883 "zerocopy_threshold": 0, 00:23:00.883 "tls_version": 0, 00:23:00.883 "enable_ktls": false 00:23:00.883 } 00:23:00.883 }, 00:23:00.883 { 00:23:00.883 "method": "sock_impl_set_options", 00:23:00.883 "params": { 00:23:00.883 "impl_name": "posix", 00:23:00.883 "recv_buf_size": 2097152, 00:23:00.883 "send_buf_size": 2097152, 00:23:00.883 "enable_recv_pipe": true, 00:23:00.883 "enable_quickack": false, 00:23:00.883 "enable_placement_id": 0, 00:23:00.883 "enable_zerocopy_send_server": true, 00:23:00.883 "enable_zerocopy_send_client": false, 00:23:00.883 "zerocopy_threshold": 0, 00:23:00.883 "tls_version": 0, 00:23:00.883 "enable_ktls": false 00:23:00.883 } 00:23:00.883 } 00:23:00.883 ] 00:23:00.883 }, 00:23:00.883 { 00:23:00.883 "subsystem": "vmd", 00:23:00.883 "config": [] 00:23:00.883 }, 00:23:00.883 { 00:23:00.883 "subsystem": "accel", 00:23:00.883 "config": [ 00:23:00.883 { 00:23:00.883 "method": "accel_set_options", 00:23:00.883 "params": { 00:23:00.883 "small_cache_size": 128, 00:23:00.883 "large_cache_size": 16, 00:23:00.883 "task_count": 2048, 00:23:00.883 "sequence_count": 2048, 00:23:00.883 "buf_count": 2048 00:23:00.883 } 00:23:00.883 } 00:23:00.883 ] 00:23:00.883 }, 00:23:00.883 { 00:23:00.883 "subsystem": "bdev", 00:23:00.883 "config": [ 00:23:00.883 { 00:23:00.883 "method": "bdev_set_options", 00:23:00.883 "params": { 00:23:00.883 "bdev_io_pool_size": 65535, 00:23:00.883 "bdev_io_cache_size": 256, 00:23:00.883 "bdev_auto_examine": true, 00:23:00.883 "iobuf_small_cache_size": 128, 00:23:00.883 "iobuf_large_cache_size": 16 00:23:00.883 } 00:23:00.883 }, 00:23:00.883 { 00:23:00.883 "method": "bdev_raid_set_options", 00:23:00.884 "params": { 00:23:00.884 "process_window_size_kb": 1024 00:23:00.884 } 00:23:00.884 }, 00:23:00.884 { 00:23:00.884 "method": "bdev_iscsi_set_options", 00:23:00.884 "params": { 00:23:00.884 "timeout_sec": 30 00:23:00.884 } 00:23:00.884 }, 
00:23:00.884 { 00:23:00.884 "method": "bdev_nvme_set_options", 00:23:00.884 "params": { 00:23:00.884 "action_on_timeout": "none", 00:23:00.884 "timeout_us": 0, 00:23:00.884 "timeout_admin_us": 0, 00:23:00.884 "keep_alive_timeout_ms": 10000, 00:23:00.884 "arbitration_burst": 0, 00:23:00.884 "low_priority_weight": 0, 00:23:00.884 "medium_priority_weight": 0, 00:23:00.884 "high_priority_weight": 0, 00:23:00.884 "nvme_adminq_poll_period_us": 10000, 00:23:00.884 "nvme_ioq_poll_period_us": 0, 00:23:00.884 "io_queue_requests": 512, 00:23:00.884 "delay_cmd_submit": true, 00:23:00.884 "transport_retry_count": 4, 00:23:00.884 "bdev_retry_count": 3, 00:23:00.884 "transport_ack_timeout": 0, 00:23:00.884 "ctrlr_loss_timeout_sec": 0, 00:23:00.884 "reconnect_delay_sec": 0, 00:23:00.884 "fast_io_fail_timeout_sec": 0, 00:23:00.884 "disable_auto_failback": false, 00:23:00.884 "generate_uuids": false, 00:23:00.884 "transport_tos": 0, 00:23:00.884 "nvme_error_stat": false, 00:23:00.884 "rdma_srq_size": 0, 00:23:00.884 "io_path_stat": false, 00:23:00.884 "allow_accel_sequence": false, 00:23:00.884 "rdma_max_cq_size": 0, 00:23:00.884 "rdma_cm_event_timeout_ms": 0, 00:23:00.884 "dhchap_digests": [ 00:23:00.884 "sha256", 00:23:00.884 "sha384", 00:23:00.884 "sha512" 00:23:00.884 ], 00:23:00.884 "dhchap_dhgroups": [ 00:23:00.884 "null", 00:23:00.884 "ffdhe2048", 00:23:00.884 "ffdhe3072", 00:23:00.884 "ffdhe4096", 00:23:00.884 "ffdhe6144", 00:23:00.884 "ffdhe8192" 00:23:00.884 ] 00:23:00.884 } 00:23:00.884 }, 00:23:00.884 { 00:23:00.884 "method": "bdev_nvme_attach_controller", 00:23:00.884 "params": { 00:23:00.884 "name": "nvme0", 00:23:00.884 "trtype": "TCP", 00:23:00.884 "adrfam": "IPv4", 00:23:00.884 "traddr": "10.0.0.2", 00:23:00.884 "trsvcid": "4420", 00:23:00.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.884 "prchk_reftag": false, 00:23:00.884 "prchk_guard": false, 00:23:00.884 "ctrlr_loss_timeout_sec": 0, 00:23:00.884 "reconnect_delay_sec": 0, 00:23:00.884 
"fast_io_fail_timeout_sec": 0, 00:23:00.884 "psk": "key0", 00:23:00.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.884 "hdgst": false, 00:23:00.884 "ddgst": false 00:23:00.884 } 00:23:00.884 }, 00:23:00.884 { 00:23:00.884 "method": "bdev_nvme_set_hotplug", 00:23:00.884 "params": { 00:23:00.884 "period_us": 100000, 00:23:00.884 "enable": false 00:23:00.884 } 00:23:00.884 }, 00:23:00.884 { 00:23:00.884 "method": "bdev_enable_histogram", 00:23:00.884 "params": { 00:23:00.884 "name": "nvme0n1", 00:23:00.884 "enable": true 00:23:00.884 } 00:23:00.884 }, 00:23:00.884 { 00:23:00.884 "method": "bdev_wait_for_examine" 00:23:00.884 } 00:23:00.884 ] 00:23:00.884 }, 00:23:00.884 { 00:23:00.884 "subsystem": "nbd", 00:23:00.884 "config": [] 00:23:00.884 } 00:23:00.884 ] 00:23:00.884 }' 00:23:00.884 [2024-06-09 09:01:23.440982] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:23:00.884 [2024-06-09 09:01:23.441035] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2641683 ] 00:23:01.145 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.145 [2024-06-09 09:01:23.515766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.145 [2024-06-09 09:01:23.569257] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.145 [2024-06-09 09:01:23.702656] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:01.715 09:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:01.715 09:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:23:01.715 09:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:01.715 09:01:24 nvmf_tcp.nvmf_tls 
-- target/tls.sh@275 -- # jq -r '.[].name' 00:23:01.977 09:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.977 09:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:01.977 Running I/O for 1 seconds... 00:23:03.373 00:23:03.374 Latency(us) 00:23:03.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.374 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:03.374 Verification LBA range: start 0x0 length 0x2000 00:23:03.374 nvme0n1 : 1.07 1477.20 5.77 0.00 0.00 84234.29 6144.00 141557.76 00:23:03.374 =================================================================================================================== 00:23:03.374 Total : 1477.20 5.77 0.00 0.00 84234.29 6144.00 141557.76 00:23:03.374 0 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # type=--id 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # id=0 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # for n in $shm_files 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:03.374 nvmf_trace.0 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@822 -- # return 0 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2641683 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2641683 ']' 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2641683 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2641683 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:03.374 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2641683' 00:23:03.375 killing process with pid 2641683 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2641683 00:23:03.375 Received shutdown signal, test time was about 1.000000 seconds 00:23:03.375 00:23:03.375 Latency(us) 00:23:03.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.375 =================================================================================================================== 00:23:03.375 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2641683 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:03.375 rmmod nvme_tcp 00:23:03.375 rmmod nvme_fabrics 00:23:03.375 rmmod nvme_keyring 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2641653 ']' 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2641653 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2641653 ']' 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2641653 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2641653 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:03.375 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2641653' 00:23:03.375 killing process with pid 2641653 00:23:03.376 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2641653 00:23:03.376 09:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2641653 00:23:03.637 09:01:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:03.637 09:01:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == 
\t\c\p ]] 00:23:03.637 09:01:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:03.637 09:01:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:03.637 09:01:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:03.637 09:01:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.637 09:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:03.637 09:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.549 09:01:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:05.549 09:01:28 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.30lzoywImQ /tmp/tmp.7FS8Zw6pgO /tmp/tmp.k4Na6MXcvk 00:23:05.821 00:23:05.821 real 1m23.855s 00:23:05.821 user 2m5.771s 00:23:05.821 sys 0m30.032s 00:23:05.821 09:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:05.821 09:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.821 ************************************ 00:23:05.821 END TEST nvmf_tls 00:23:05.821 ************************************ 00:23:05.821 09:01:28 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:05.821 09:01:28 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:05.821 09:01:28 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:05.821 09:01:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:05.821 ************************************ 00:23:05.821 START TEST nvmf_fips 00:23:05.821 ************************************ 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:05.821 * Looking for test storage... 
00:23:05.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:05.821 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:05.822 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:23:06.234 Error setting digest 00:23:06.234 00F2A76D837F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:06.234 00F2A76D837F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:23:06.234 09:01:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:12.826 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:12.826 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:12.826 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:12.826 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:12.826 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.086 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.086 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.086 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:13.086 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.086 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:13.086 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.086 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:13.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:23:13.086 00:23:13.086 --- 10.0.0.2 ping statistics --- 00:23:13.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.086 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:23:13.086 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:13.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:13.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.703 ms 00:23:13.086 00:23:13.086 --- 10.0.0.1 ping statistics --- 00:23:13.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.086 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:23:13.086 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.086 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:13.086 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:13.086 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.086 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:13.086 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:13.086 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.086 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:13.086 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:13.347 09:01:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:13.347 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:13.347 09:01:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:13.347 09:01:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:13.347 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2646381 00:23:13.347 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2646381 00:23:13.347 09:01:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:13.347 09:01:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 2646381 ']' 00:23:13.347 09:01:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:23:13.347 09:01:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:13.347 09:01:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.347 09:01:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:13.347 09:01:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:13.347 [2024-06-09 09:01:35.753731] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:23:13.347 [2024-06-09 09:01:35.753802] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.347 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.347 [2024-06-09 09:01:35.841654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.608 [2024-06-09 09:01:35.934594] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.608 [2024-06-09 09:01:35.934649] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.608 [2024-06-09 09:01:35.934657] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.608 [2024-06-09 09:01:35.934665] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.608 [2024-06-09 09:01:35.934671] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:13.608 [2024-06-09 09:01:35.934706] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.179 09:01:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:14.179 09:01:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:23:14.179 09:01:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.179 09:01:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:14.179 09:01:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:14.179 09:01:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.179 09:01:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:14.179 09:01:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:14.179 09:01:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:14.179 09:01:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:14.179 09:01:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:14.179 09:01:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:14.179 09:01:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:14.179 09:01:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:14.179 [2024-06-09 09:01:36.710904] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.179 [2024-06-09 09:01:36.726906] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:23:14.179 [2024-06-09 09:01:36.727148] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.440 [2024-06-09 09:01:36.756789] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:14.440 malloc0 00:23:14.440 09:01:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:14.440 09:01:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2646727 00:23:14.440 09:01:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2646727 /var/tmp/bdevperf.sock 00:23:14.440 09:01:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:14.440 09:01:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 2646727 ']' 00:23:14.440 09:01:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.440 09:01:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:14.441 09:01:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.441 09:01:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:14.441 09:01:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:14.441 [2024-06-09 09:01:36.857712] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:23:14.441 [2024-06-09 09:01:36.857787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2646727 ] 00:23:14.441 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.441 [2024-06-09 09:01:36.914745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.441 [2024-06-09 09:01:36.978430] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.381 09:01:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:15.381 09:01:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:23:15.381 09:01:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:15.381 [2024-06-09 09:01:37.757943] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.381 [2024-06-09 09:01:37.758015] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:15.381 TLSTESTn1 00:23:15.381 09:01:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:15.642 Running I/O for 10 seconds... 
00:23:25.665 00:23:25.665 Latency(us) 00:23:25.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.665 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:25.665 Verification LBA range: start 0x0 length 0x2000 00:23:25.665 TLSTESTn1 : 10.08 1989.24 7.77 0.00 0.00 64121.93 5242.88 121460.05 00:23:25.665 =================================================================================================================== 00:23:25.665 Total : 1989.24 7.77 0.00 0.00 64121.93 5242.88 121460.05 00:23:25.665 0 00:23:25.665 09:01:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:25.665 09:01:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:25.665 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # type=--id 00:23:25.665 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # id=0 00:23:25.665 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:23:25.665 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:25.665 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:23:25.665 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:23:25.665 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # for n in $shm_files 00:23:25.665 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:25.665 nvmf_trace.0 00:23:25.665 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@822 -- # return 0 00:23:25.665 09:01:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2646727 00:23:25.665 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 2646727 ']' 00:23:25.665 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill 
-0 2646727 00:23:25.665 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:23:25.665 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:25.665 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2646727 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2646727' 00:23:25.925 killing process with pid 2646727 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 2646727 00:23:25.925 Received shutdown signal, test time was about 10.000000 seconds 00:23:25.925 00:23:25.925 Latency(us) 00:23:25.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.925 =================================================================================================================== 00:23:25.925 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:25.925 [2024-06-09 09:01:48.226519] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 2646727 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:23:25.925 rmmod nvme_tcp 00:23:25.925 rmmod nvme_fabrics 00:23:25.925 rmmod nvme_keyring 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2646381 ']' 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2646381 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 2646381 ']' 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 2646381 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2646381 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2646381' 00:23:25.925 killing process with pid 2646381 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 2646381 00:23:25.925 [2024-06-09 09:01:48.468460] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:25.925 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 2646381 00:23:26.186 09:01:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:26.186 09:01:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:26.186 09:01:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:23:26.186 09:01:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:26.186 09:01:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:26.186 09:01:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.186 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:26.186 09:01:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.104 09:01:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:28.104 09:01:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:28.104 00:23:28.104 real 0m22.476s 00:23:28.104 user 0m22.736s 00:23:28.104 sys 0m10.448s 00:23:28.104 09:01:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:28.104 09:01:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:28.104 ************************************ 00:23:28.104 END TEST nvmf_fips 00:23:28.104 ************************************ 00:23:28.365 09:01:50 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:28.365 09:01:50 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:28.365 09:01:50 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:28.365 09:01:50 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:28.365 09:01:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:28.365 ************************************ 00:23:28.365 START TEST nvmf_fuzz 00:23:28.365 ************************************ 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:28.365 * Looking for test 
storage... 00:23:28.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:28.365 09:01:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:36.574 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:36.575 
09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:36.575 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:36.575 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # 
for pci in "${pci_devs[@]}" 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:36.575 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:36.575 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # 
is_hw=yes 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:36.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:36.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms 00:23:36.575 00:23:36.575 --- 10.0.0.2 ping statistics --- 00:23:36.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.575 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:36.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:36.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.419 ms 00:23:36.575 00:23:36.575 --- 10.0.0.1 ping statistics --- 00:23:36.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.575 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- 
target/fabrics_fuzz.sh@14 -- # nvmfpid=2653058 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2653058 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@830 -- # '[' -z 2653058 ']' 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:36.575 09:01:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:36.575 09:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:36.575 09:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@863 -- # return 0 00:23:36.575 09:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:36.576 Malloc0 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 
0 ]] 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:36.576 09:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:08.724 Fuzzing completed. Shutting down the fuzz application 00:24:08.724 00:24:08.724 Dumping successful admin opcodes: 00:24:08.724 8, 9, 10, 24, 00:24:08.724 Dumping successful io opcodes: 00:24:08.724 0, 9, 00:24:08.724 NS: 0x200003aeff00 I/O qp, Total commands completed: 912372, total successful commands: 5307, random_seed: 282531648 00:24:08.724 NS: 0x200003aeff00 admin qp, Total commands completed: 115533, total successful commands: 944, random_seed: 2879410880 00:24:08.724 09:02:29 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:08.724 Fuzzing completed. 
Shutting down the fuzz application 00:24:08.724 00:24:08.724 Dumping successful admin opcodes: 00:24:08.724 24, 00:24:08.724 Dumping successful io opcodes: 00:24:08.724 00:24:08.724 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 263218037 00:24:08.724 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 263289423 00:24:08.724 09:02:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:08.724 09:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.724 09:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:08.724 09:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.724 09:02:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:08.724 09:02:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:08.724 09:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:08.724 09:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:08.724 09:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:08.724 09:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:08.724 09:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:08.724 09:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:08.724 rmmod nvme_tcp 00:24:08.724 rmmod nvme_fabrics 00:24:08.724 rmmod nvme_keyring 00:24:08.724 09:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:08.724 09:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 2653058 ']' 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # 
killprocess 2653058 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@949 -- # '[' -z 2653058 ']' 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # kill -0 2653058 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # uname 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2653058 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2653058' 00:24:08.725 killing process with pid 2653058 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@968 -- # kill 2653058 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@973 -- # wait 2653058 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.725 09:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.642 09:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:10.642 09:02:32 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 
-- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:10.642 00:24:10.642 real 0m42.260s 00:24:10.642 user 0m56.392s 00:24:10.642 sys 0m15.097s 00:24:10.642 09:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:10.642 09:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:10.642 ************************************ 00:24:10.642 END TEST nvmf_fuzz 00:24:10.642 ************************************ 00:24:10.642 09:02:33 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:10.642 09:02:33 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:10.642 09:02:33 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:10.642 09:02:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:10.642 ************************************ 00:24:10.642 START TEST nvmf_multiconnection 00:24:10.642 ************************************ 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:10.642 * Looking for test storage... 
00:24:10.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:10.642 
09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:10.642 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:10.643 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.643 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.643 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.643 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:10.643 09:02:33 nvmf_tcp.nvmf_multiconnection 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:10.643 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:10.643 09:02:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:10.643 09:02:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:10.643 09:02:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:10.643 09:02:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:10.643 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:10.903 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.903 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:10.903 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:10.903 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:10.903 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.903 09:02:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.903 09:02:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.904 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:10.904 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:10.904 09:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:10.904 09:02:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@291 -- # pci_devs=() 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.488 
09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:17.488 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.488 
09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:17.488 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:17.488 Found net devices under 
0000:4b:00.0: cvl_0_0 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:17.488 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.488 
09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.488 09:02:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.749 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.749 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.749 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:17.749 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.749 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.749 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.749 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:17.749 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:24:17.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:24:17.749 00:24:17.749 --- 10.0.0.2 ping statistics --- 00:24:17.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.749 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:24:17.749 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:17.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.425 ms 00:24:17.749 00:24:17.749 --- 10.0.0.1 ping statistics --- 00:24:17.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.749 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:24:17.749 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.749 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:17.749 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:17.749 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.749 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:17.749 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:17.749 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.749 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:17.750 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:17.750 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:17.750 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:17.750 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:17.750 
09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.010 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=2663420 00:24:18.010 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 2663420 00:24:18.010 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:18.010 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@830 -- # '[' -z 2663420 ']' 00:24:18.010 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.010 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:18.010 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.010 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:18.010 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.010 [2024-06-09 09:02:40.362158] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:24:18.010 [2024-06-09 09:02:40.362207] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.010 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.010 [2024-06-09 09:02:40.428813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:18.010 [2024-06-09 09:02:40.497164] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:18.010 [2024-06-09 09:02:40.497201] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.010 [2024-06-09 09:02:40.497209] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.010 [2024-06-09 09:02:40.497216] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.010 [2024-06-09 09:02:40.497221] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:18.010 [2024-06-09 09:02:40.497385] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.010 [2024-06-09 09:02:40.497500] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.010 [2024-06-09 09:02:40.497617] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.010 [2024-06-09 09:02:40.497618] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@863 -- # return 0 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.270 
[2024-06-09 09:02:40.641265] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.270 Malloc1 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:18.270 09:02:40 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.270 [2024-06-09 09:02:40.708873] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.270 Malloc2 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.270 Malloc3 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.270 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.530 Malloc4 00:24:18.530 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.530 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:18.530 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.530 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.530 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.530 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:18.530 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.530 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 Malloc5 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 Malloc6 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.531 09:02:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 Malloc7 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 Malloc8 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.531 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.791 Malloc9 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.791 Malloc10 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.791 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.792 Malloc11 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.792 09:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:20.703 09:02:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:20.703 09:02:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:24:20.703 09:02:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:20.703 09:02:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:20.703 09:02:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:24:22.614 09:02:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:22.614 09:02:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:22.614 09:02:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK1 00:24:22.614 09:02:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:22.614 09:02:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- 
# (( nvme_devices == nvme_device_counter )) 00:24:22.614 09:02:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:24:22.614 09:02:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.614 09:02:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:24.040 09:02:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:24.040 09:02:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:24:24.040 09:02:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:24.040 09:02:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:24.040 09:02:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:24:25.959 09:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:25.959 09:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:25.959 09:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK2 00:24:25.959 09:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:25.959 09:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:25.959 09:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:24:25.959 09:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:25.959 09:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:27.874 09:02:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:27.874 09:02:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:24:27.874 09:02:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:27.874 09:02:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:27.874 09:02:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:24:29.787 09:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:29.787 09:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:29.787 09:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK3 00:24:29.787 09:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:29.787 09:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:29.787 09:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:24:29.787 09:02:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:29.787 09:02:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:31.697 09:02:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:31.697 09:02:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # 
local i=0 00:24:31.697 09:02:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:31.697 09:02:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:31.697 09:02:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:24:33.610 09:02:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:33.610 09:02:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:33.610 09:02:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK4 00:24:33.610 09:02:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:33.610 09:02:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:33.610 09:02:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:24:33.610 09:02:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:33.610 09:02:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:34.996 09:02:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:34.996 09:02:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:24:34.996 09:02:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:34.996 09:02:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:34.996 09:02:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:24:36.907 09:02:59 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:36.907 09:02:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:36.907 09:02:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK5 00:24:36.907 09:02:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:36.907 09:02:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:36.907 09:02:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:24:36.907 09:02:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.907 09:02:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:38.819 09:03:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:38.819 09:03:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:24:38.819 09:03:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:38.819 09:03:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:38.819 09:03:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:24:40.866 09:03:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:40.866 09:03:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:40.866 09:03:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK6 00:24:40.866 09:03:03 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:40.866 09:03:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:40.866 09:03:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:24:40.866 09:03:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.866 09:03:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:42.778 09:03:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:42.778 09:03:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:24:42.778 09:03:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:42.778 09:03:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:42.778 09:03:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:24:44.685 09:03:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:44.685 09:03:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:44.685 09:03:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK7 00:24:44.685 09:03:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:44.685 09:03:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:44.685 09:03:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:24:44.685 09:03:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:24:44.685 09:03:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:46.629 09:03:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:46.629 09:03:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:24:46.629 09:03:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:46.629 09:03:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:46.629 09:03:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:24:48.541 09:03:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:48.541 09:03:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:48.541 09:03:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK8 00:24:48.541 09:03:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:48.541 09:03:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:48.541 09:03:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:24:48.541 09:03:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.541 09:03:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:51.084 09:03:13 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:51.084 09:03:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:24:51.084 09:03:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:51.084 09:03:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:51.084 09:03:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:24:52.468 09:03:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:52.468 09:03:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:52.468 09:03:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK9 00:24:52.729 09:03:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:52.729 09:03:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:52.729 09:03:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:24:52.729 09:03:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.729 09:03:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:54.639 09:03:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:54.639 09:03:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:24:54.639 09:03:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:54.639 09:03:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # 
[[ -n '' ]] 00:24:54.639 09:03:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:24:56.551 09:03:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:56.551 09:03:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:56.551 09:03:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK10 00:24:56.551 09:03:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:56.551 09:03:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:56.551 09:03:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:24:56.551 09:03:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:56.551 09:03:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:58.465 09:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:58.465 09:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:24:58.465 09:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:58.465 09:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:58.465 09:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:25:00.378 09:03:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:25:00.378 09:03:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:00.378 09:03:22 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK11 00:25:00.378 09:03:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:25:00.378 09:03:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:25:00.378 09:03:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:25:00.378 09:03:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:00.378 [global] 00:25:00.378 thread=1 00:25:00.378 invalidate=1 00:25:00.378 rw=read 00:25:00.378 time_based=1 00:25:00.378 runtime=10 00:25:00.378 ioengine=libaio 00:25:00.378 direct=1 00:25:00.378 bs=262144 00:25:00.378 iodepth=64 00:25:00.378 norandommap=1 00:25:00.378 numjobs=1 00:25:00.378 00:25:00.378 [job0] 00:25:00.378 filename=/dev/nvme0n1 00:25:00.378 [job1] 00:25:00.378 filename=/dev/nvme10n1 00:25:00.378 [job2] 00:25:00.378 filename=/dev/nvme1n1 00:25:00.378 [job3] 00:25:00.378 filename=/dev/nvme2n1 00:25:00.378 [job4] 00:25:00.378 filename=/dev/nvme3n1 00:25:00.378 [job5] 00:25:00.378 filename=/dev/nvme4n1 00:25:00.642 [job6] 00:25:00.642 filename=/dev/nvme5n1 00:25:00.642 [job7] 00:25:00.642 filename=/dev/nvme6n1 00:25:00.642 [job8] 00:25:00.642 filename=/dev/nvme7n1 00:25:00.642 [job9] 00:25:00.642 filename=/dev/nvme8n1 00:25:00.642 [job10] 00:25:00.642 filename=/dev/nvme9n1 00:25:00.642 Could not set queue depth (nvme0n1) 00:25:00.642 Could not set queue depth (nvme10n1) 00:25:00.642 Could not set queue depth (nvme1n1) 00:25:00.642 Could not set queue depth (nvme2n1) 00:25:00.642 Could not set queue depth (nvme3n1) 00:25:00.642 Could not set queue depth (nvme4n1) 00:25:00.642 Could not set queue depth (nvme5n1) 00:25:00.642 Could not set queue depth (nvme6n1) 00:25:00.642 Could not set queue depth (nvme7n1) 00:25:00.642 Could not set 
queue depth (nvme8n1) 00:25:00.642 Could not set queue depth (nvme9n1) 00:25:01.211 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:01.211 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:01.211 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:01.211 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:01.211 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:01.211 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:01.211 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:01.211 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:01.211 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:01.211 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:01.211 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:01.211 fio-3.35 00:25:01.211 Starting 11 threads 00:25:13.479 00:25:13.479 job0: (groupid=0, jobs=1): err= 0: pid=2672734: Sun Jun 9 09:03:33 2024 00:25:13.479 read: IOPS=946, BW=237MiB/s (248MB/s)(2382MiB/10066msec) 00:25:13.479 slat (usec): min=6, max=26212, avg=1024.19, stdev=2582.52 00:25:13.480 clat (msec): min=23, max=151, avg=66.50, stdev=24.11 00:25:13.480 lat (msec): min=23, max=151, avg=67.53, stdev=24.46 00:25:13.480 clat percentiles (msec): 00:25:13.480 | 1.00th=[ 28], 5.00th=[ 32], 10.00th=[ 37], 20.00th=[ 45], 00:25:13.480 | 30.00th=[ 52], 
40.00th=[ 58], 50.00th=[ 64], 60.00th=[ 70], 00:25:13.480 | 70.00th=[ 78], 80.00th=[ 87], 90.00th=[ 103], 95.00th=[ 113], 00:25:13.480 | 99.00th=[ 128], 99.50th=[ 133], 99.90th=[ 140], 99.95th=[ 140], 00:25:13.480 | 99.99th=[ 153] 00:25:13.480 bw ( KiB/s): min=147456, max=448512, per=10.54%, avg=242329.60, stdev=76886.56, samples=20 00:25:13.480 iops : min= 576, max= 1752, avg=946.60, stdev=300.34, samples=20 00:25:13.480 lat (msec) : 50=27.99%, 100=60.69%, 250=11.32% 00:25:13.480 cpu : usr=0.40%, sys=3.04%, ctx=2076, majf=0, minf=4097 00:25:13.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:13.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.480 issued rwts: total=9529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.480 job1: (groupid=0, jobs=1): err= 0: pid=2672735: Sun Jun 9 09:03:33 2024 00:25:13.480 read: IOPS=1460, BW=365MiB/s (383MB/s)(3658MiB/10014msec) 00:25:13.480 slat (usec): min=5, max=82687, avg=461.67, stdev=2564.14 00:25:13.480 clat (usec): min=1504, max=163153, avg=43300.47, stdev=28957.41 00:25:13.480 lat (usec): min=1552, max=172694, avg=43762.14, stdev=29324.27 00:25:13.480 clat percentiles (msec): 00:25:13.480 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 13], 20.00th=[ 19], 00:25:13.480 | 30.00th=[ 25], 40.00th=[ 29], 50.00th=[ 33], 60.00th=[ 46], 00:25:13.480 | 70.00th=[ 55], 80.00th=[ 71], 90.00th=[ 88], 95.00th=[ 100], 00:25:13.480 | 99.00th=[ 117], 99.50th=[ 130], 99.90th=[ 146], 99.95th=[ 148], 00:25:13.480 | 99.99th=[ 153] 00:25:13.480 bw ( KiB/s): min=221696, max=516096, per=16.22%, avg=372915.20, stdev=80176.70, samples=20 00:25:13.480 iops : min= 866, max= 2016, avg=1456.70, stdev=313.19, samples=20 00:25:13.480 lat (msec) : 2=0.03%, 4=1.42%, 10=6.28%, 20=14.24%, 50=42.28% 00:25:13.480 lat (msec) : 100=31.07%, 250=4.67% 
00:25:13.480 cpu : usr=0.63%, sys=4.21%, ctx=3655, majf=0, minf=4097 00:25:13.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:13.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.480 issued rwts: total=14630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.480 job2: (groupid=0, jobs=1): err= 0: pid=2672736: Sun Jun 9 09:03:33 2024 00:25:13.480 read: IOPS=713, BW=178MiB/s (187MB/s)(1802MiB/10094msec) 00:25:13.480 slat (usec): min=7, max=54343, avg=1385.30, stdev=3632.46 00:25:13.480 clat (msec): min=27, max=210, avg=88.15, stdev=20.34 00:25:13.480 lat (msec): min=27, max=210, avg=89.54, stdev=20.59 00:25:13.480 clat percentiles (msec): 00:25:13.480 | 1.00th=[ 49], 5.00th=[ 57], 10.00th=[ 63], 20.00th=[ 71], 00:25:13.480 | 30.00th=[ 78], 40.00th=[ 83], 50.00th=[ 87], 60.00th=[ 93], 00:25:13.480 | 70.00th=[ 100], 80.00th=[ 106], 90.00th=[ 113], 95.00th=[ 121], 00:25:13.480 | 99.00th=[ 134], 99.50th=[ 161], 99.90th=[ 207], 99.95th=[ 211], 00:25:13.480 | 99.99th=[ 211] 00:25:13.480 bw ( KiB/s): min=133632, max=250368, per=7.96%, avg=182874.70, stdev=34945.20, samples=20 00:25:13.480 iops : min= 522, max= 978, avg=714.35, stdev=136.51, samples=20 00:25:13.480 lat (msec) : 50=1.26%, 100=70.98%, 250=27.75% 00:25:13.480 cpu : usr=0.23%, sys=2.59%, ctx=1500, majf=0, minf=4097 00:25:13.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:13.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.480 issued rwts: total=7206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.480 job3: (groupid=0, jobs=1): err= 0: pid=2672737: Sun Jun 9 09:03:33 2024 
00:25:13.480 read: IOPS=734, BW=184MiB/s (193MB/s)(1852MiB/10089msec) 00:25:13.480 slat (usec): min=6, max=57388, avg=1129.09, stdev=3559.28 00:25:13.480 clat (msec): min=3, max=203, avg=85.95, stdev=28.22 00:25:13.480 lat (msec): min=3, max=232, avg=87.08, stdev=28.72 00:25:13.480 clat percentiles (msec): 00:25:13.480 | 1.00th=[ 19], 5.00th=[ 32], 10.00th=[ 48], 20.00th=[ 63], 00:25:13.480 | 30.00th=[ 71], 40.00th=[ 81], 50.00th=[ 90], 60.00th=[ 97], 00:25:13.480 | 70.00th=[ 105], 80.00th=[ 111], 90.00th=[ 118], 95.00th=[ 125], 00:25:13.480 | 99.00th=[ 140], 99.50th=[ 150], 99.90th=[ 194], 99.95th=[ 205], 00:25:13.480 | 99.99th=[ 205] 00:25:13.480 bw ( KiB/s): min=138240, max=276992, per=8.18%, avg=188032.00, stdev=41803.15, samples=20 00:25:13.480 iops : min= 540, max= 1082, avg=734.50, stdev=163.29, samples=20 00:25:13.480 lat (msec) : 4=0.01%, 10=0.24%, 20=1.03%, 50=9.81%, 100=52.83% 00:25:13.480 lat (msec) : 250=36.08% 00:25:13.480 cpu : usr=0.22%, sys=2.36%, ctx=1899, majf=0, minf=4097 00:25:13.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:13.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.480 issued rwts: total=7409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.480 job4: (groupid=0, jobs=1): err= 0: pid=2672741: Sun Jun 9 09:03:33 2024 00:25:13.480 read: IOPS=740, BW=185MiB/s (194MB/s)(1867MiB/10090msec) 00:25:13.480 slat (usec): min=6, max=47419, avg=1058.49, stdev=3346.55 00:25:13.480 clat (msec): min=3, max=206, avg=85.30, stdev=28.66 00:25:13.480 lat (msec): min=3, max=206, avg=86.36, stdev=29.08 00:25:13.480 clat percentiles (msec): 00:25:13.480 | 1.00th=[ 17], 5.00th=[ 33], 10.00th=[ 48], 20.00th=[ 61], 00:25:13.480 | 30.00th=[ 71], 40.00th=[ 81], 50.00th=[ 89], 60.00th=[ 96], 00:25:13.480 | 70.00th=[ 103], 80.00th=[ 109], 
90.00th=[ 116], 95.00th=[ 124], 00:25:13.480 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 197], 99.95th=[ 207], 00:25:13.480 | 99.99th=[ 207] 00:25:13.480 bw ( KiB/s): min=141312, max=288768, per=8.25%, avg=189568.00, stdev=46057.50, samples=20 00:25:13.480 iops : min= 552, max= 1128, avg=740.50, stdev=179.91, samples=20 00:25:13.480 lat (msec) : 4=0.04%, 10=0.08%, 20=1.41%, 50=9.92%, 100=54.31% 00:25:13.480 lat (msec) : 250=34.24% 00:25:13.480 cpu : usr=0.28%, sys=2.30%, ctx=2006, majf=0, minf=4097 00:25:13.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:13.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.480 issued rwts: total=7468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.480 job5: (groupid=0, jobs=1): err= 0: pid=2672751: Sun Jun 9 09:03:33 2024 00:25:13.480 read: IOPS=843, BW=211MiB/s (221MB/s)(2113MiB/10022msec) 00:25:13.480 slat (usec): min=6, max=47439, avg=1180.14, stdev=3016.77 00:25:13.480 clat (msec): min=13, max=132, avg=74.60, stdev=20.14 00:25:13.480 lat (msec): min=21, max=133, avg=75.78, stdev=20.38 00:25:13.480 clat percentiles (msec): 00:25:13.480 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 56], 00:25:13.480 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 80], 00:25:13.480 | 70.00th=[ 87], 80.00th=[ 95], 90.00th=[ 104], 95.00th=[ 110], 00:25:13.480 | 99.00th=[ 120], 99.50th=[ 124], 99.90th=[ 133], 99.95th=[ 133], 00:25:13.480 | 99.99th=[ 133] 00:25:13.480 bw ( KiB/s): min=154112, max=316416, per=9.34%, avg=214784.00, stdev=50377.84, samples=20 00:25:13.480 iops : min= 602, max= 1236, avg=839.00, stdev=196.79, samples=20 00:25:13.480 lat (msec) : 20=0.01%, 50=8.96%, 100=77.72%, 250=13.31% 00:25:13.480 cpu : usr=0.31%, sys=2.96%, ctx=1797, majf=0, minf=4097 00:25:13.480 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:13.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.480 issued rwts: total=8453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.480 job6: (groupid=0, jobs=1): err= 0: pid=2672758: Sun Jun 9 09:03:33 2024 00:25:13.480 read: IOPS=747, BW=187MiB/s (196MB/s)(1887MiB/10098msec) 00:25:13.480 slat (usec): min=5, max=95834, avg=1185.10, stdev=3650.62 00:25:13.480 clat (msec): min=4, max=212, avg=84.34, stdev=29.17 00:25:13.480 lat (msec): min=4, max=223, avg=85.53, stdev=29.50 00:25:13.480 clat percentiles (msec): 00:25:13.480 | 1.00th=[ 18], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 62], 00:25:13.480 | 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 90], 00:25:13.480 | 70.00th=[ 97], 80.00th=[ 106], 90.00th=[ 118], 95.00th=[ 132], 00:25:13.480 | 99.00th=[ 163], 99.50th=[ 180], 99.90th=[ 201], 99.95th=[ 213], 00:25:13.480 | 99.99th=[ 213] 00:25:13.480 bw ( KiB/s): min=121856, max=305152, per=8.33%, avg=191564.80, stdev=44925.45, samples=20 00:25:13.480 iops : min= 476, max= 1192, avg=748.30, stdev=175.49, samples=20 00:25:13.480 lat (msec) : 10=0.13%, 20=1.59%, 50=10.42%, 100=61.46%, 250=26.40% 00:25:13.480 cpu : usr=0.22%, sys=2.22%, ctx=1727, majf=0, minf=4097 00:25:13.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:13.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.480 issued rwts: total=7546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.480 job7: (groupid=0, jobs=1): err= 0: pid=2672763: Sun Jun 9 09:03:33 2024 00:25:13.481 read: IOPS=771, BW=193MiB/s (202MB/s)(1933MiB/10019msec) 00:25:13.481 slat 
(usec): min=7, max=35303, avg=1289.59, stdev=3126.29 00:25:13.481 clat (msec): min=17, max=132, avg=81.54, stdev=16.01 00:25:13.481 lat (msec): min=21, max=132, avg=82.83, stdev=16.17 00:25:13.481 clat percentiles (msec): 00:25:13.481 | 1.00th=[ 48], 5.00th=[ 57], 10.00th=[ 62], 20.00th=[ 68], 00:25:13.481 | 30.00th=[ 73], 40.00th=[ 77], 50.00th=[ 81], 60.00th=[ 86], 00:25:13.481 | 70.00th=[ 91], 80.00th=[ 96], 90.00th=[ 103], 95.00th=[ 108], 00:25:13.481 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 131], 99.95th=[ 132], 00:25:13.481 | 99.99th=[ 133] 00:25:13.481 bw ( KiB/s): min=159232, max=241664, per=8.54%, avg=196275.20, stdev=24001.87, samples=20 00:25:13.481 iops : min= 622, max= 944, avg=766.70, stdev=93.76, samples=20 00:25:13.481 lat (msec) : 20=0.01%, 50=1.44%, 100=84.77%, 250=13.78% 00:25:13.481 cpu : usr=0.29%, sys=2.92%, ctx=1640, majf=0, minf=4097 00:25:13.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:13.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.481 issued rwts: total=7730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.481 job8: (groupid=0, jobs=1): err= 0: pid=2672779: Sun Jun 9 09:03:33 2024 00:25:13.481 read: IOPS=694, BW=174MiB/s (182MB/s)(1752MiB/10094msec) 00:25:13.481 slat (usec): min=8, max=53465, avg=1417.53, stdev=3631.66 00:25:13.481 clat (msec): min=22, max=209, avg=90.68, stdev=21.11 00:25:13.481 lat (msec): min=22, max=210, avg=92.10, stdev=21.34 00:25:13.481 clat percentiles (msec): 00:25:13.481 | 1.00th=[ 41], 5.00th=[ 53], 10.00th=[ 64], 20.00th=[ 77], 00:25:13.481 | 30.00th=[ 82], 40.00th=[ 87], 50.00th=[ 92], 60.00th=[ 96], 00:25:13.481 | 70.00th=[ 102], 80.00th=[ 107], 90.00th=[ 115], 95.00th=[ 123], 00:25:13.481 | 99.00th=[ 134], 99.50th=[ 165], 99.90th=[ 199], 99.95th=[ 199], 00:25:13.481 | 
99.99th=[ 211] 00:25:13.481 bw ( KiB/s): min=134656, max=251392, per=7.73%, avg=177715.20, stdev=30476.01, samples=20 00:25:13.481 iops : min= 526, max= 982, avg=694.20, stdev=119.05, samples=20 00:25:13.481 lat (msec) : 50=4.22%, 100=63.40%, 250=32.37% 00:25:13.481 cpu : usr=0.29%, sys=2.50%, ctx=1457, majf=0, minf=3535 00:25:13.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:13.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.481 issued rwts: total=7006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.481 job9: (groupid=0, jobs=1): err= 0: pid=2672786: Sun Jun 9 09:03:33 2024 00:25:13.481 read: IOPS=664, BW=166MiB/s (174MB/s)(1677MiB/10094msec) 00:25:13.481 slat (usec): min=6, max=73621, avg=1487.84, stdev=3851.10 00:25:13.481 clat (msec): min=43, max=221, avg=94.70, stdev=19.46 00:25:13.481 lat (msec): min=44, max=221, avg=96.19, stdev=19.65 00:25:13.481 clat percentiles (msec): 00:25:13.481 | 1.00th=[ 55], 5.00th=[ 64], 10.00th=[ 71], 20.00th=[ 80], 00:25:13.481 | 30.00th=[ 85], 40.00th=[ 90], 50.00th=[ 94], 60.00th=[ 100], 00:25:13.481 | 70.00th=[ 105], 80.00th=[ 110], 90.00th=[ 118], 95.00th=[ 125], 00:25:13.481 | 99.00th=[ 146], 99.50th=[ 171], 99.90th=[ 199], 99.95th=[ 213], 00:25:13.481 | 99.99th=[ 222] 00:25:13.481 bw ( KiB/s): min=132608, max=221184, per=7.40%, avg=170137.60, stdev=24481.56, samples=20 00:25:13.481 iops : min= 518, max= 864, avg=664.60, stdev=95.63, samples=20 00:25:13.481 lat (msec) : 50=0.37%, 100=61.69%, 250=37.93% 00:25:13.481 cpu : usr=0.28%, sys=2.41%, ctx=1372, majf=0, minf=4097 00:25:13.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:13.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.481 issued rwts: total=6709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.481 job10: (groupid=0, jobs=1): err= 0: pid=2672791: Sun Jun 9 09:03:33 2024 00:25:13.481 read: IOPS=692, BW=173MiB/s (181MB/s)(1746MiB/10087msec) 00:25:13.481 slat (usec): min=6, max=104364, avg=1320.71, stdev=3920.20 00:25:13.481 clat (msec): min=18, max=203, avg=91.01, stdev=28.16 00:25:13.481 lat (msec): min=18, max=203, avg=92.33, stdev=28.49 00:25:13.481 clat percentiles (msec): 00:25:13.481 | 1.00th=[ 35], 5.00th=[ 54], 10.00th=[ 59], 20.00th=[ 65], 00:25:13.481 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 91], 60.00th=[ 99], 00:25:13.481 | 70.00th=[ 106], 80.00th=[ 115], 90.00th=[ 128], 95.00th=[ 140], 00:25:13.481 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 205], 99.95th=[ 205], 00:25:13.481 | 99.99th=[ 205] 00:25:13.481 bw ( KiB/s): min=130048, max=270336, per=7.71%, avg=177152.00, stdev=43970.54, samples=20 00:25:13.481 iops : min= 508, max= 1056, avg=692.00, stdev=171.76, samples=20 00:25:13.481 lat (msec) : 20=0.07%, 50=3.54%, 100=59.04%, 250=37.35% 00:25:13.481 cpu : usr=0.28%, sys=2.03%, ctx=1543, majf=0, minf=4097 00:25:13.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:13.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.481 issued rwts: total=6983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.481 00:25:13.481 Run status group 0 (all jobs): 00:25:13.481 READ: bw=2245MiB/s (2354MB/s), 166MiB/s-365MiB/s (174MB/s-383MB/s), io=22.1GiB (23.8GB), run=10014-10098msec 00:25:13.481 00:25:13.481 Disk stats (read/write): 00:25:13.481 nvme0n1: ios=18674/0, merge=0/0, ticks=1218191/0, in_queue=1218191, util=96.46% 00:25:13.481 nvme10n1: ios=28437/0, merge=0/0, 
ticks=1228743/0, in_queue=1228743, util=96.74% 00:25:13.481 nvme1n1: ios=14150/0, merge=0/0, ticks=1212385/0, in_queue=1212385, util=97.08% 00:25:13.481 nvme2n1: ios=14557/0, merge=0/0, ticks=1219173/0, in_queue=1219173, util=97.27% 00:25:13.481 nvme3n1: ios=14666/0, merge=0/0, ticks=1220781/0, in_queue=1220781, util=97.41% 00:25:13.481 nvme4n1: ios=16415/0, merge=0/0, ticks=1216835/0, in_queue=1216835, util=97.86% 00:25:13.481 nvme5n1: ios=14804/0, merge=0/0, ticks=1216106/0, in_queue=1216106, util=98.07% 00:25:13.481 nvme6n1: ios=15044/0, merge=0/0, ticks=1216586/0, in_queue=1216586, util=98.24% 00:25:13.481 nvme7n1: ios=13744/0, merge=0/0, ticks=1212927/0, in_queue=1212927, util=98.80% 00:25:13.481 nvme8n1: ios=13146/0, merge=0/0, ticks=1213552/0, in_queue=1213552, util=99.00% 00:25:13.481 nvme9n1: ios=13536/0, merge=0/0, ticks=1221085/0, in_queue=1221085, util=99.21% 00:25:13.481 09:03:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:13.481 [global] 00:25:13.481 thread=1 00:25:13.481 invalidate=1 00:25:13.481 rw=randwrite 00:25:13.481 time_based=1 00:25:13.481 runtime=10 00:25:13.481 ioengine=libaio 00:25:13.481 direct=1 00:25:13.481 bs=262144 00:25:13.481 iodepth=64 00:25:13.481 norandommap=1 00:25:13.481 numjobs=1 00:25:13.481 00:25:13.481 [job0] 00:25:13.481 filename=/dev/nvme0n1 00:25:13.481 [job1] 00:25:13.481 filename=/dev/nvme10n1 00:25:13.481 [job2] 00:25:13.481 filename=/dev/nvme1n1 00:25:13.481 [job3] 00:25:13.481 filename=/dev/nvme2n1 00:25:13.481 [job4] 00:25:13.481 filename=/dev/nvme3n1 00:25:13.481 [job5] 00:25:13.481 filename=/dev/nvme4n1 00:25:13.481 [job6] 00:25:13.481 filename=/dev/nvme5n1 00:25:13.481 [job7] 00:25:13.481 filename=/dev/nvme6n1 00:25:13.481 [job8] 00:25:13.481 filename=/dev/nvme7n1 00:25:13.481 [job9] 00:25:13.481 filename=/dev/nvme8n1 00:25:13.481 [job10] 00:25:13.481 filename=/dev/nvme9n1 
00:25:13.481 Could not set queue depth (nvme0n1) 00:25:13.481 Could not set queue depth (nvme10n1) 00:25:13.481 Could not set queue depth (nvme1n1) 00:25:13.481 Could not set queue depth (nvme2n1) 00:25:13.481 Could not set queue depth (nvme3n1) 00:25:13.481 Could not set queue depth (nvme4n1) 00:25:13.481 Could not set queue depth (nvme5n1) 00:25:13.481 Could not set queue depth (nvme6n1) 00:25:13.481 Could not set queue depth (nvme7n1) 00:25:13.481 Could not set queue depth (nvme8n1) 00:25:13.481 Could not set queue depth (nvme9n1) 00:25:13.481 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.481 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.481 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.481 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.481 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.481 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.481 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.481 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.481 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.481 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.481 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.481 fio-3.35 00:25:13.481 Starting 11 
threads 00:25:23.480 00:25:23.480 job0: (groupid=0, jobs=1): err= 0: pid=2674831: Sun Jun 9 09:03:45 2024 00:25:23.480 write: IOPS=749, BW=187MiB/s (196MB/s)(1890MiB/10089msec); 0 zone resets 00:25:23.480 slat (usec): min=22, max=130713, avg=1194.54, stdev=2838.23 00:25:23.480 clat (msec): min=6, max=267, avg=84.16, stdev=21.36 00:25:23.480 lat (msec): min=8, max=326, avg=85.35, stdev=21.53 00:25:23.480 clat percentiles (msec): 00:25:23.480 | 1.00th=[ 36], 5.00th=[ 66], 10.00th=[ 69], 20.00th=[ 73], 00:25:23.480 | 30.00th=[ 77], 40.00th=[ 79], 50.00th=[ 82], 60.00th=[ 85], 00:25:23.480 | 70.00th=[ 88], 80.00th=[ 92], 90.00th=[ 101], 95.00th=[ 110], 00:25:23.480 | 99.00th=[ 178], 99.50th=[ 205], 99.90th=[ 255], 99.95th=[ 262], 00:25:23.480 | 99.99th=[ 268] 00:25:23.480 bw ( KiB/s): min=118272, max=222208, per=13.72%, avg=191897.60, stdev=23823.19, samples=20 00:25:23.480 iops : min= 462, max= 868, avg=749.60, stdev=93.06, samples=20 00:25:23.480 lat (msec) : 10=0.04%, 20=0.28%, 50=1.59%, 100=88.23%, 250=9.72% 00:25:23.480 lat (msec) : 500=0.15% 00:25:23.480 cpu : usr=1.52%, sys=2.33%, ctx=2479, majf=0, minf=1 00:25:23.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:23.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.480 issued rwts: total=0,7559,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.480 job1: (groupid=0, jobs=1): err= 0: pid=2674832: Sun Jun 9 09:03:45 2024 00:25:23.480 write: IOPS=300, BW=75.2MiB/s (78.8MB/s)(765MiB/10169msec); 0 zone resets 00:25:23.480 slat (usec): min=24, max=780797, avg=3174.55, stdev=22118.31 00:25:23.480 clat (msec): min=14, max=1645, avg=209.57, stdev=263.94 00:25:23.480 lat (msec): min=14, max=1646, avg=212.74, stdev=266.52 00:25:23.480 clat percentiles (msec): 00:25:23.480 | 1.00th=[ 35], 5.00th=[ 71], 
10.00th=[ 77], 20.00th=[ 82], 00:25:23.480 | 30.00th=[ 86], 40.00th=[ 93], 50.00th=[ 103], 60.00th=[ 174], 00:25:23.480 | 70.00th=[ 239], 80.00th=[ 268], 90.00th=[ 288], 95.00th=[ 642], 00:25:23.480 | 99.00th=[ 1519], 99.50th=[ 1586], 99.90th=[ 1586], 99.95th=[ 1653], 00:25:23.480 | 99.99th=[ 1653] 00:25:23.480 bw ( KiB/s): min= 1024, max=200192, per=5.48%, avg=76672.00, stdev=66230.61, samples=20 00:25:23.480 iops : min= 4, max= 782, avg=299.50, stdev=258.71, samples=20 00:25:23.480 lat (msec) : 20=0.13%, 50=1.70%, 100=47.16%, 250=24.98%, 500=19.91% 00:25:23.480 lat (msec) : 750=1.77%, 1000=0.49%, 2000=3.86% 00:25:23.480 cpu : usr=0.59%, sys=0.89%, ctx=973, majf=0, minf=1 00:25:23.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:25:23.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.480 issued rwts: total=0,3058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.480 job2: (groupid=0, jobs=1): err= 0: pid=2674846: Sun Jun 9 09:03:45 2024 00:25:23.480 write: IOPS=372, BW=93.2MiB/s (97.7MB/s)(943MiB/10117msec); 0 zone resets 00:25:23.480 slat (usec): min=18, max=92608, avg=2472.11, stdev=5681.30 00:25:23.480 clat (msec): min=7, max=353, avg=169.11, stdev=54.36 00:25:23.480 lat (msec): min=7, max=354, avg=171.58, stdev=54.94 00:25:23.480 clat percentiles (msec): 00:25:23.480 | 1.00th=[ 51], 5.00th=[ 88], 10.00th=[ 101], 20.00th=[ 136], 00:25:23.480 | 30.00th=[ 146], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 165], 00:25:23.480 | 70.00th=[ 180], 80.00th=[ 220], 90.00th=[ 253], 95.00th=[ 271], 00:25:23.480 | 99.00th=[ 313], 99.50th=[ 334], 99.90th=[ 351], 99.95th=[ 355], 00:25:23.480 | 99.99th=[ 355] 00:25:23.480 bw ( KiB/s): min=57344, max=146944, per=6.79%, avg=94934.35, stdev=26088.51, samples=20 00:25:23.480 iops : min= 224, max= 574, avg=370.80, 
stdev=101.91, samples=20 00:25:23.480 lat (msec) : 10=0.03%, 20=0.08%, 50=0.88%, 100=8.86%, 250=79.53% 00:25:23.480 lat (msec) : 500=10.63% 00:25:23.480 cpu : usr=0.89%, sys=0.99%, ctx=1228, majf=0, minf=1 00:25:23.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:25:23.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.481 issued rwts: total=0,3771,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.481 job3: (groupid=0, jobs=1): err= 0: pid=2674847: Sun Jun 9 09:03:45 2024 00:25:23.481 write: IOPS=546, BW=137MiB/s (143MB/s)(1381MiB/10101msec); 0 zone resets 00:25:23.481 slat (usec): min=18, max=131984, avg=1570.42, stdev=4347.72 00:25:23.481 clat (msec): min=9, max=283, avg=115.42, stdev=66.50 00:25:23.481 lat (msec): min=11, max=285, avg=116.99, stdev=67.46 00:25:23.481 clat percentiles (msec): 00:25:23.481 | 1.00th=[ 20], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 60], 00:25:23.481 | 30.00th=[ 64], 40.00th=[ 77], 50.00th=[ 93], 60.00th=[ 116], 00:25:23.481 | 70.00th=[ 148], 80.00th=[ 188], 90.00th=[ 222], 95.00th=[ 243], 00:25:23.481 | 99.00th=[ 262], 99.50th=[ 271], 99.90th=[ 284], 99.95th=[ 284], 00:25:23.481 | 99.99th=[ 284] 00:25:23.481 bw ( KiB/s): min=65536, max=287744, per=10.00%, avg=139811.65, stdev=75904.67, samples=20 00:25:23.481 iops : min= 256, max= 1124, avg=546.10, stdev=296.52, samples=20 00:25:23.481 lat (msec) : 10=0.02%, 20=1.30%, 50=9.38%, 100=42.98%, 250=43.28% 00:25:23.481 lat (msec) : 500=3.04% 00:25:23.481 cpu : usr=1.24%, sys=1.57%, ctx=2365, majf=0, minf=1 00:25:23.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:23.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.481 issued 
rwts: total=0,5524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.481 job4: (groupid=0, jobs=1): err= 0: pid=2674848: Sun Jun 9 09:03:45 2024 00:25:23.481 write: IOPS=642, BW=161MiB/s (168MB/s)(1621MiB/10088msec); 0 zone resets 00:25:23.481 slat (usec): min=18, max=24019, avg=1476.75, stdev=2845.56 00:25:23.481 clat (msec): min=10, max=228, avg=98.06, stdev=31.41 00:25:23.481 lat (msec): min=10, max=228, avg=99.54, stdev=31.81 00:25:23.481 clat percentiles (msec): 00:25:23.481 | 1.00th=[ 33], 5.00th=[ 59], 10.00th=[ 67], 20.00th=[ 74], 00:25:23.481 | 30.00th=[ 78], 40.00th=[ 83], 50.00th=[ 92], 60.00th=[ 101], 00:25:23.481 | 70.00th=[ 120], 80.00th=[ 128], 90.00th=[ 136], 95.00th=[ 146], 00:25:23.481 | 99.00th=[ 197], 99.50th=[ 211], 99.90th=[ 228], 99.95th=[ 228], 00:25:23.481 | 99.99th=[ 230] 00:25:23.481 bw ( KiB/s): min=111104, max=243200, per=11.76%, avg=164393.95, stdev=41080.20, samples=20 00:25:23.481 iops : min= 434, max= 950, avg=642.15, stdev=160.47, samples=20 00:25:23.481 lat (msec) : 20=0.15%, 50=3.27%, 100=56.85%, 250=39.73% 00:25:23.481 cpu : usr=1.51%, sys=1.83%, ctx=1975, majf=0, minf=1 00:25:23.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:23.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.481 issued rwts: total=0,6484,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.481 job5: (groupid=0, jobs=1): err= 0: pid=2674849: Sun Jun 9 09:03:45 2024 00:25:23.481 write: IOPS=412, BW=103MiB/s (108MB/s)(1042MiB/10111msec); 0 zone resets 00:25:23.481 slat (usec): min=20, max=93214, avg=2227.59, stdev=5112.71 00:25:23.481 clat (msec): min=10, max=269, avg=153.01, stdev=50.07 00:25:23.481 lat (msec): min=11, max=269, avg=155.24, stdev=50.72 00:25:23.481 clat percentiles 
(msec): 00:25:23.481 | 1.00th=[ 35], 5.00th=[ 68], 10.00th=[ 84], 20.00th=[ 108], 00:25:23.481 | 30.00th=[ 138], 40.00th=[ 148], 50.00th=[ 155], 60.00th=[ 161], 00:25:23.481 | 70.00th=[ 174], 80.00th=[ 199], 90.00th=[ 222], 95.00th=[ 241], 00:25:23.481 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 266], 99.95th=[ 271], 00:25:23.481 | 99.99th=[ 271] 00:25:23.481 bw ( KiB/s): min=69632, max=200192, per=7.51%, avg=105073.10, stdev=30842.04, samples=20 00:25:23.481 iops : min= 272, max= 782, avg=410.40, stdev=120.47, samples=20 00:25:23.481 lat (msec) : 20=0.34%, 50=1.73%, 100=15.14%, 250=81.11%, 500=1.68% 00:25:23.481 cpu : usr=0.87%, sys=1.31%, ctx=1446, majf=0, minf=1 00:25:23.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:23.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.481 issued rwts: total=0,4167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.481 job6: (groupid=0, jobs=1): err= 0: pid=2674850: Sun Jun 9 09:03:45 2024 00:25:23.481 write: IOPS=500, BW=125MiB/s (131MB/s)(1266MiB/10106msec); 0 zone resets 00:25:23.481 slat (usec): min=23, max=120635, avg=1797.57, stdev=4570.10 00:25:23.481 clat (msec): min=7, max=317, avg=125.86, stdev=55.06 00:25:23.481 lat (msec): min=7, max=321, avg=127.66, stdev=55.72 00:25:23.481 clat percentiles (msec): 00:25:23.481 | 1.00th=[ 23], 5.00th=[ 52], 10.00th=[ 65], 20.00th=[ 74], 00:25:23.481 | 30.00th=[ 83], 40.00th=[ 110], 50.00th=[ 126], 60.00th=[ 132], 00:25:23.481 | 70.00th=[ 161], 80.00th=[ 176], 90.00th=[ 199], 95.00th=[ 218], 00:25:23.481 | 99.00th=[ 275], 99.50th=[ 288], 99.90th=[ 309], 99.95th=[ 313], 00:25:23.481 | 99.99th=[ 317] 00:25:23.481 bw ( KiB/s): min=77824, max=224768, per=9.15%, avg=127990.20, stdev=47649.07, samples=20 00:25:23.481 iops : min= 304, max= 878, avg=499.95, stdev=186.14, 
samples=20 00:25:23.481 lat (msec) : 10=0.10%, 20=0.63%, 50=3.95%, 100=32.85%, 250=60.42% 00:25:23.481 lat (msec) : 500=2.05% 00:25:23.481 cpu : usr=1.14%, sys=1.46%, ctx=1809, majf=0, minf=1 00:25:23.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:23.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.481 issued rwts: total=0,5063,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.481 job7: (groupid=0, jobs=1): err= 0: pid=2674851: Sun Jun 9 09:03:45 2024 00:25:23.481 write: IOPS=427, BW=107MiB/s (112MB/s)(1081MiB/10113msec); 0 zone resets 00:25:23.481 slat (usec): min=27, max=55765, avg=2215.74, stdev=4609.89 00:25:23.481 clat (msec): min=6, max=286, avg=147.39, stdev=55.17 00:25:23.481 lat (msec): min=6, max=289, avg=149.60, stdev=55.84 00:25:23.481 clat percentiles (msec): 00:25:23.481 | 1.00th=[ 31], 5.00th=[ 69], 10.00th=[ 80], 20.00th=[ 105], 00:25:23.481 | 30.00th=[ 125], 40.00th=[ 130], 50.00th=[ 134], 60.00th=[ 144], 00:25:23.481 | 70.00th=[ 169], 80.00th=[ 199], 90.00th=[ 234], 95.00th=[ 251], 00:25:23.481 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 284], 99.95th=[ 284], 00:25:23.481 | 99.99th=[ 288] 00:25:23.481 bw ( KiB/s): min=65536, max=201216, per=7.80%, avg=109117.50, stdev=34053.56, samples=20 00:25:23.481 iops : min= 256, max= 786, avg=426.20, stdev=133.03, samples=20 00:25:23.481 lat (msec) : 10=0.05%, 20=0.32%, 50=1.69%, 100=16.92%, 250=75.77% 00:25:23.481 lat (msec) : 500=5.25% 00:25:23.481 cpu : usr=1.08%, sys=1.41%, ctx=1323, majf=0, minf=1 00:25:23.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:25:23.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.481 issued rwts: 
total=0,4325,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.481 job8: (groupid=0, jobs=1): err= 0: pid=2674853: Sun Jun 9 09:03:45 2024 00:25:23.481 write: IOPS=548, BW=137MiB/s (144MB/s)(1382MiB/10076msec); 0 zone resets 00:25:23.481 slat (usec): min=28, max=72595, avg=1721.54, stdev=3575.12 00:25:23.481 clat (msec): min=24, max=225, avg=114.84, stdev=35.95 00:25:23.481 lat (msec): min=26, max=225, avg=116.56, stdev=36.32 00:25:23.481 clat percentiles (msec): 00:25:23.481 | 1.00th=[ 48], 5.00th=[ 70], 10.00th=[ 73], 20.00th=[ 78], 00:25:23.481 | 30.00th=[ 81], 40.00th=[ 97], 50.00th=[ 118], 60.00th=[ 129], 00:25:23.481 | 70.00th=[ 142], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 169], 00:25:23.481 | 99.00th=[ 188], 99.50th=[ 205], 99.90th=[ 220], 99.95th=[ 220], 00:25:23.481 | 99.99th=[ 226] 00:25:23.481 bw ( KiB/s): min=100352, max=215040, per=10.01%, avg=139929.60, stdev=41491.49, samples=20 00:25:23.481 iops : min= 392, max= 840, avg=546.60, stdev=162.08, samples=20 00:25:23.481 lat (msec) : 50=1.05%, 100=41.65%, 250=57.30% 00:25:23.481 cpu : usr=1.11%, sys=1.70%, ctx=1608, majf=0, minf=1 00:25:23.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:23.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.482 issued rwts: total=0,5529,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.482 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.482 job9: (groupid=0, jobs=1): err= 0: pid=2674854: Sun Jun 9 09:03:45 2024 00:25:23.482 write: IOPS=418, BW=105MiB/s (110MB/s)(1057MiB/10096msec); 0 zone resets 00:25:23.482 slat (usec): min=20, max=75172, avg=2032.68, stdev=5464.09 00:25:23.482 clat (msec): min=3, max=475, avg=150.82, stdev=83.15 00:25:23.482 lat (msec): min=4, max=475, avg=152.85, stdev=83.94 00:25:23.482 clat percentiles (msec): 
00:25:23.482 | 1.00th=[ 24], 5.00th=[ 66], 10.00th=[ 70], 20.00th=[ 80], 00:25:23.482 | 30.00th=[ 90], 40.00th=[ 102], 50.00th=[ 116], 60.00th=[ 159], 00:25:23.482 | 70.00th=[ 207], 80.00th=[ 230], 90.00th=[ 245], 95.00th=[ 288], 00:25:23.482 | 99.00th=[ 418], 99.50th=[ 426], 99.90th=[ 456], 99.95th=[ 456], 00:25:23.482 | 99.99th=[ 477] 00:25:23.482 bw ( KiB/s): min=57344, max=220672, per=7.62%, avg=106572.80, stdev=50164.53, samples=20 00:25:23.482 iops : min= 224, max= 862, avg=416.30, stdev=195.96, samples=20 00:25:23.482 lat (msec) : 4=0.02%, 10=0.14%, 20=0.64%, 50=1.11%, 100=37.20% 00:25:23.482 lat (msec) : 250=52.30%, 500=8.59% 00:25:23.482 cpu : usr=1.09%, sys=1.13%, ctx=1496, majf=0, minf=1 00:25:23.482 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:23.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.482 issued rwts: total=0,4226,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.482 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.482 job10: (groupid=0, jobs=1): err= 0: pid=2674855: Sun Jun 9 09:03:45 2024 00:25:23.482 write: IOPS=578, BW=145MiB/s (152MB/s)(1461MiB/10094msec); 0 zone resets 00:25:23.482 slat (usec): min=33, max=83042, avg=1658.45, stdev=3361.33 00:25:23.482 clat (msec): min=18, max=217, avg=108.81, stdev=37.35 00:25:23.482 lat (msec): min=18, max=217, avg=110.47, stdev=37.76 00:25:23.482 clat percentiles (msec): 00:25:23.482 | 1.00th=[ 47], 5.00th=[ 61], 10.00th=[ 71], 20.00th=[ 77], 00:25:23.482 | 30.00th=[ 82], 40.00th=[ 88], 50.00th=[ 94], 60.00th=[ 114], 00:25:23.482 | 70.00th=[ 142], 80.00th=[ 148], 90.00th=[ 159], 95.00th=[ 169], 00:25:23.482 | 99.00th=[ 194], 99.50th=[ 205], 99.90th=[ 213], 99.95th=[ 218], 00:25:23.482 | 99.99th=[ 218] 00:25:23.482 bw ( KiB/s): min=90112, max=245248, per=10.58%, avg=147942.40, stdev=47597.44, samples=20 00:25:23.482 iops : min= 352, 
max= 958, avg=577.90, stdev=185.93, samples=20 00:25:23.482 lat (msec) : 20=0.10%, 50=1.69%, 100=53.25%, 250=44.95% 00:25:23.482 cpu : usr=1.31%, sys=1.74%, ctx=1636, majf=0, minf=1 00:25:23.482 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:23.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:23.482 issued rwts: total=0,5842,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.482 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:23.482 00:25:23.482 Run status group 0 (all jobs): 00:25:23.482 WRITE: bw=1366MiB/s (1432MB/s), 75.2MiB/s-187MiB/s (78.8MB/s-196MB/s), io=13.6GiB (14.6GB), run=10076-10169msec 00:25:23.482 00:25:23.482 Disk stats (read/write): 00:25:23.482 nvme0n1: ios=46/14886, merge=0/0, ticks=4021/1201889, in_queue=1205910, util=99.63% 00:25:23.482 nvme10n1: ios=49/5968, merge=0/0, ticks=344/1099303, in_queue=1099647, util=98.53% 00:25:23.482 nvme1n1: ios=47/7357, merge=0/0, ticks=1358/1196318, in_queue=1197676, util=99.87% 00:25:23.482 nvme2n1: ios=27/10841, merge=0/0, ticks=71/1212899, in_queue=1212970, util=98.01% 00:25:23.482 nvme3n1: ios=25/12768, merge=0/0, ticks=469/1208206, in_queue=1208675, util=98.92% 00:25:23.482 nvme4n1: ios=0/8132, merge=0/0, ticks=0/1209488, in_queue=1209488, util=98.14% 00:25:23.482 nvme5n1: ios=24/9928, merge=0/0, ticks=1331/1201886, in_queue=1203217, util=100.00% 00:25:23.482 nvme6n1: ios=0/8451, merge=0/0, ticks=0/1207223, in_queue=1207223, util=98.41% 00:25:23.482 nvme7n1: ios=41/10766, merge=0/0, ticks=3448/1203187, in_queue=1206635, util=99.94% 00:25:23.482 nvme8n1: ios=0/8263, merge=0/0, ticks=0/1206703, in_queue=1206703, util=98.94% 00:25:23.482 nvme9n1: ios=48/11457, merge=0/0, ticks=409/1208876, in_queue=1209285, util=99.96% 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:23.482 09:03:45 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:23.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK1 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK1 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:23.482 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection 
-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK2 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK2 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.482 09:03:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:23.744 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:23.744 09:03:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:23.744 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:23.744 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:23.744 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK3 00:25:23.744 09:03:46 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:23.744 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK3 00:25:23.744 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:23.744 09:03:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:23.744 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.744 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.744 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.744 09:03:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.744 09:03:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:24.005 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:24.005 09:03:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:24.005 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:24.005 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:24.005 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK4 00:25:24.005 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:24.005 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK4 00:25:24.005 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:24.005 09:03:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:24.006 09:03:46 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.006 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.006 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.006 09:03:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.006 09:03:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:24.266 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:24.266 09:03:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:24.266 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:24.266 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:24.266 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK5 00:25:24.266 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:24.266 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK5 00:25:24.266 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:24.266 09:03:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:24.266 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.266 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.266 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.266 09:03:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.266 09:03:46 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:24.527 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:24.527 09:03:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:24.528 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:24.528 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:24.528 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK6 00:25:24.528 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK6 00:25:24.528 09:03:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:24.528 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:24.528 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:24.528 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.528 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.528 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.528 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.528 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:24.789 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:24.789 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:24.789 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:24.789 09:03:47 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:24.789 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK7 00:25:24.789 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK7 00:25:24.789 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:24.789 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:24.789 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:24.789 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.789 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.789 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.789 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.789 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:25.049 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:25.049 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:25.049 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:25.049 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:25.050 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK8 00:25:25.050 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:25.050 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK8 00:25:25.050 09:03:47 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1230 -- # return 0 00:25:25.050 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:25.050 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.050 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:25.050 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.050 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.050 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:25.310 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK9 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK9 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:25.310 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK10 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK10 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.310 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:25.570 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:25.570 09:03:47 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:25.570 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:25.570 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:25.570 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK11 00:25:25.570 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:25.570 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK11 00:25:25.570 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:25.570 09:03:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:25.570 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.570 09:03:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:25.570 09:03:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.570 09:03:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:25.570 09:03:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:25.570 09:03:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:25.571 09:03:48 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:25.571 rmmod nvme_tcp 00:25:25.571 rmmod nvme_fabrics 00:25:25.571 rmmod nvme_keyring 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 2663420 ']' 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 2663420 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@949 -- # '[' -z 2663420 ']' 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # kill -0 2663420 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # uname 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2663420 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2663420' 00:25:25.571 killing process with pid 2663420 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@968 -- # kill 2663420 00:25:25.571 09:03:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@973 -- # wait 2663420 00:25:26.141 09:03:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:26.141 09:03:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p 
]] 00:25:26.141 09:03:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:26.141 09:03:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:26.141 09:03:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:26.141 09:03:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.141 09:03:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:26.141 09:03:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.053 09:03:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:28.053 00:25:28.053 real 1m17.414s 00:25:28.053 user 4m54.073s 00:25:28.053 sys 0m22.107s 00:25:28.053 09:03:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:28.053 09:03:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.053 ************************************ 00:25:28.053 END TEST nvmf_multiconnection 00:25:28.053 ************************************ 00:25:28.053 09:03:50 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:28.053 09:03:50 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:28.053 09:03:50 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:28.054 09:03:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:28.054 ************************************ 00:25:28.054 START TEST nvmf_initiator_timeout 00:25:28.054 ************************************ 00:25:28.054 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 
00:25:28.316 * Looking for test storage... 00:25:28.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.316 09:03:50 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:28.316 09:03:50 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:28.316 09:03:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 
00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.908 
09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:34.908 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 
-- # [[ tcp == rdma ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:34.908 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:4b:00.0: cvl_0_0' 00:25:34.908 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:34.908 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.908 09:03:57 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:34.908 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:34.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:34.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.718 ms 00:25:34.909 00:25:34.909 --- 10.0.0.2 ping statistics --- 00:25:34.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.909 rtt min/avg/max/mdev = 0.718/0.718/0.718/0.000 ms 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:34.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:34.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.413 ms 00:25:34.909 00:25:34.909 --- 10.0.0.1 ping statistics --- 00:25:34.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.909 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=2681414 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 2681414 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@830 -- # '[' -z 2681414 ']' 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:34.909 09:03:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.170 [2024-06-09 09:03:57.479551] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:25:35.170 [2024-06-09 09:03:57.479615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.170 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.170 [2024-06-09 09:03:57.551951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:35.170 [2024-06-09 09:03:57.626643] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:35.170 [2024-06-09 09:03:57.626681] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:35.170 [2024-06-09 09:03:57.626688] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:35.170 [2024-06-09 09:03:57.626695] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:35.170 [2024-06-09 09:03:57.626701] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:35.170 [2024-06-09 09:03:57.626839] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.170 [2024-06-09 09:03:57.626957] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:35.170 [2024-06-09 09:03:57.627113] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.170 [2024-06-09 09:03:57.627114] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:25:35.742 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:35.742 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@863 -- # return 0 00:25:35.742 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:35.742 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:35.742 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:36.003 Malloc0 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:36.003 Delay0 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:36.003 [2024-06-09 09:03:58.344119] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:36.003 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.004 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:36.004 09:03:58 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.004 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:36.004 [2024-06-09 09:03:58.384391] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.004 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.004 09:03:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:37.389 09:03:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:37.389 09:03:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1197 -- # local i=0 00:25:37.389 09:03:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:25:37.389 09:03:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:25:37.389 09:03:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # sleep 2 00:25:40.011 09:04:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:25:40.011 09:04:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:40.011 09:04:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:25:40.011 09:04:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:25:40.011 09:04:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:25:40.011 09:04:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # return 0 00:25:40.011 09:04:01 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2682172 00:25:40.011 09:04:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:40.011 09:04:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:40.011 [global] 00:25:40.011 thread=1 00:25:40.011 invalidate=1 00:25:40.011 rw=write 00:25:40.011 time_based=1 00:25:40.011 runtime=60 00:25:40.011 ioengine=libaio 00:25:40.011 direct=1 00:25:40.011 bs=4096 00:25:40.011 iodepth=1 00:25:40.011 norandommap=0 00:25:40.011 numjobs=1 00:25:40.011 00:25:40.011 verify_dump=1 00:25:40.011 verify_backlog=512 00:25:40.011 verify_state_save=0 00:25:40.011 do_verify=1 00:25:40.011 verify=crc32c-intel 00:25:40.011 [job0] 00:25:40.011 filename=/dev/nvme0n1 00:25:40.011 Could not set queue depth (nvme0n1) 00:25:40.011 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:40.011 fio-3.35 00:25:40.011 Starting 1 thread 00:25:42.667 09:04:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:42.667 09:04:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.667 09:04:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:42.667 true 00:25:42.667 09:04:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.667 09:04:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:42.667 09:04:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.667 09:04:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:42.667 true 00:25:42.667 09:04:04 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.667 09:04:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:42.667 09:04:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.667 09:04:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:42.667 true 00:25:42.667 09:04:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.668 09:04:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:42.668 09:04:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.668 09:04:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:42.668 true 00:25:42.668 09:04:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.668 09:04:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:45.969 09:04:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:45.969 09:04:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:45.969 09:04:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:45.969 true 00:25:45.969 09:04:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:45.969 09:04:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:45.970 09:04:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:45.970 09:04:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:45.970 true 
00:25:45.970 09:04:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:45.970 09:04:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:45.970 09:04:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:45.970 09:04:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:45.970 true 00:25:45.970 09:04:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:45.970 09:04:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:45.970 09:04:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:45.970 09:04:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:45.970 true 00:25:45.970 09:04:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:45.970 09:04:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:45.970 09:04:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2682172 00:26:42.251 00:26:42.251 job0: (groupid=0, jobs=1): err= 0: pid=2682371: Sun Jun 9 09:05:02 2024 00:26:42.251 read: IOPS=56, BW=225KiB/s (230kB/s)(13.2MiB/60001msec) 00:26:42.251 slat (usec): min=7, max=10425, avg=31.32, stdev=232.05 00:26:42.251 clat (usec): min=1004, max=41807k, avg=16697.35, stdev=720057.68 00:26:42.251 lat (usec): min=1040, max=41807k, avg=16728.67, stdev=720057.60 00:26:42.251 clat percentiles (usec): 00:26:42.251 | 1.00th=[ 1139], 5.00th=[ 1237], 10.00th=[ 1270], 00:26:42.251 | 20.00th=[ 1303], 30.00th=[ 1319], 40.00th=[ 1336], 00:26:42.251 | 50.00th=[ 1336], 60.00th=[ 1352], 70.00th=[ 1369], 00:26:42.251 | 80.00th=[ 1385], 90.00th=[ 1418], 95.00th=[ 41681], 00:26:42.251 | 99.00th=[ 42206], 99.50th=[ 
42206], 99.90th=[ 42206], 00:26:42.251 | 99.95th=[ 42206], 99.99th=[17112761] 00:26:42.251 write: IOPS=59, BW=239KiB/s (245kB/s)(14.0MiB/60001msec); 0 zone resets 00:26:42.251 slat (usec): min=9, max=32141, avg=42.08, stdev=536.33 00:26:42.251 clat (usec): min=606, max=1186, avg=947.84, stdev=57.39 00:26:42.251 lat (usec): min=639, max=33130, avg=989.92, stdev=540.11 00:26:42.251 clat percentiles (usec): 00:26:42.251 | 1.00th=[ 758], 5.00th=[ 840], 10.00th=[ 873], 20.00th=[ 914], 00:26:42.251 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 971], 00:26:42.251 | 70.00th=[ 971], 80.00th=[ 979], 90.00th=[ 996], 95.00th=[ 1012], 00:26:42.251 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[ 1172], 99.95th=[ 1188], 00:26:42.251 | 99.99th=[ 1188] 00:26:42.251 bw ( KiB/s): min= 72, max= 4024, per=100.00%, avg=2179.69, stdev=1474.80, samples=13 00:26:42.251 iops : min= 18, max= 1006, avg=544.92, stdev=368.70, samples=13 00:26:42.251 lat (usec) : 750=0.40%, 1000=46.30% 00:26:42.251 lat (msec) : 2=49.72%, 4=0.01%, 10=0.01%, 50=3.54%, >=2000=0.01% 00:26:42.251 cpu : usr=0.17%, sys=0.39%, ctx=6961, majf=0, minf=1 00:26:42.251 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:42.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.251 issued rwts: total=3371,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.251 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:42.251 00:26:42.251 Run status group 0 (all jobs): 00:26:42.251 READ: bw=225KiB/s (230kB/s), 225KiB/s-225KiB/s (230kB/s-230kB/s), io=13.2MiB (13.8MB), run=60001-60001msec 00:26:42.251 WRITE: bw=239KiB/s (245kB/s), 239KiB/s-239KiB/s (245kB/s-245kB/s), io=14.0MiB (14.7MB), run=60001-60001msec 00:26:42.251 00:26:42.251 Disk stats (read/write): 00:26:42.251 nvme0n1: ios=3349/3584, merge=0/0, ticks=15826/3196, in_queue=19022, util=99.72% 00:26:42.251 09:05:02 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:42.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1218 -- # local i=0 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1230 -- # return 0 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:42.251 nvmf hotplug test: fio successful as expected 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:42.251 rmmod nvme_tcp 00:26:42.251 rmmod nvme_fabrics 00:26:42.251 rmmod nvme_keyring 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 2681414 ']' 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 2681414 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@949 -- # '[' -z 2681414 ']' 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # kill -0 2681414 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # uname 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:42.251 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2681414 00:26:42.252 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # process_name=reactor_0 
00:26:42.252 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:42.252 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2681414' 00:26:42.252 killing process with pid 2681414 00:26:42.252 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # kill 2681414 00:26:42.252 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # wait 2681414 00:26:42.252 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:42.252 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:42.252 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:42.252 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:42.252 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:42.252 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.252 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:42.252 09:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.513 09:05:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:42.513 00:26:42.513 real 1m14.395s 00:26:42.513 user 4m38.217s 00:26:42.513 sys 0m6.712s 00:26:42.513 09:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:42.513 09:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:42.513 ************************************ 00:26:42.513 END TEST nvmf_initiator_timeout 00:26:42.514 ************************************ 00:26:42.514 09:05:04 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == 
phy ]] 00:26:42.514 09:05:04 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:42.514 09:05:04 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:42.514 09:05:04 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:42.514 09:05:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.103 09:05:11 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:49.365 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:49.365 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.365 
09:05:11 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:49.365 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:49.365 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:49.365 09:05:11 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:49.365 09:05:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:49.365 09:05:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:49.365 09:05:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:49.365 ************************************ 00:26:49.365 START TEST nvmf_perf_adq 00:26:49.365 ************************************ 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:49.365 * Looking for test storage... 
00:26:49.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.365 09:05:11 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:49.365 09:05:11 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:49.365 09:05:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:55.951 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:55.952 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.952 09:05:18 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:55.952 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.952 
09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:55.952 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:55.952 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:55.952 09:05:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:57.338 09:05:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:59.253 09:05:21 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:04.551 09:05:26 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:04.551 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.551 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:04.552 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.552 09:05:26 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:04.552 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:04.552 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 
netns cvl_0_0_ns_spdk 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:04.552 09:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.552 09:05:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:04.552 09:05:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:04.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.803 ms 00:27:04.813 00:27:04.813 --- 10.0.0.2 ping statistics --- 00:27:04.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.813 rtt min/avg/max/mdev = 0.803/0.803/0.803/0.000 ms 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:04.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.446 ms 00:27:04.813 00:27:04.813 --- 10.0.0.1 ping statistics --- 00:27:04.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.813 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2703315 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2703315 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 
-- # '[' -z 2703315 ']' 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:04.813 09:05:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:04.814 [2024-06-09 09:05:27.240247] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:27:04.814 [2024-06-09 09:05:27.240314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.814 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.814 [2024-06-09 09:05:27.313559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:05.075 [2024-06-09 09:05:27.387128] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.075 [2024-06-09 09:05:27.387166] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.075 [2024-06-09 09:05:27.387173] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.075 [2024-06-09 09:05:27.387180] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.075 [2024-06-09 09:05:27.387185] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:05.075 [2024-06-09 09:05:27.387327] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.075 [2024-06-09 09:05:27.387446] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.075 [2024-06-09 09:05:27.387607] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.075 [2024-06-09 09:05:27.387606] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 
00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.647 [2024-06-09 09:05:28.196354] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:05.647 09:05:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:05.908 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:05.908 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.908 Malloc1 00:27:05.908 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:05.908 09:05:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:05.908 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:05.908 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.908 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:05.908 
09:05:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:05.908 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:05.908 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.908 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:05.908 09:05:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:05.908 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:05.908 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.908 [2024-06-09 09:05:28.255708] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.908 09:05:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:05.908 09:05:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2703665 00:27:05.908 09:05:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:05.908 09:05:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:05.908 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.819 09:05:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:07.819 09:05:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.819 09:05:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:07.819 09:05:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.819 09:05:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:07.819 
"tick_rate": 2400000000, 00:27:07.819 "poll_groups": [ 00:27:07.819 { 00:27:07.819 "name": "nvmf_tgt_poll_group_000", 00:27:07.819 "admin_qpairs": 1, 00:27:07.819 "io_qpairs": 1, 00:27:07.819 "current_admin_qpairs": 1, 00:27:07.819 "current_io_qpairs": 1, 00:27:07.819 "pending_bdev_io": 0, 00:27:07.819 "completed_nvme_io": 20150, 00:27:07.819 "transports": [ 00:27:07.819 { 00:27:07.819 "trtype": "TCP" 00:27:07.819 } 00:27:07.819 ] 00:27:07.819 }, 00:27:07.819 { 00:27:07.819 "name": "nvmf_tgt_poll_group_001", 00:27:07.819 "admin_qpairs": 0, 00:27:07.819 "io_qpairs": 1, 00:27:07.819 "current_admin_qpairs": 0, 00:27:07.819 "current_io_qpairs": 1, 00:27:07.819 "pending_bdev_io": 0, 00:27:07.819 "completed_nvme_io": 28793, 00:27:07.819 "transports": [ 00:27:07.819 { 00:27:07.819 "trtype": "TCP" 00:27:07.819 } 00:27:07.819 ] 00:27:07.819 }, 00:27:07.819 { 00:27:07.819 "name": "nvmf_tgt_poll_group_002", 00:27:07.819 "admin_qpairs": 0, 00:27:07.819 "io_qpairs": 1, 00:27:07.819 "current_admin_qpairs": 0, 00:27:07.819 "current_io_qpairs": 1, 00:27:07.819 "pending_bdev_io": 0, 00:27:07.819 "completed_nvme_io": 21539, 00:27:07.819 "transports": [ 00:27:07.819 { 00:27:07.819 "trtype": "TCP" 00:27:07.819 } 00:27:07.819 ] 00:27:07.819 }, 00:27:07.819 { 00:27:07.819 "name": "nvmf_tgt_poll_group_003", 00:27:07.819 "admin_qpairs": 0, 00:27:07.819 "io_qpairs": 1, 00:27:07.819 "current_admin_qpairs": 0, 00:27:07.819 "current_io_qpairs": 1, 00:27:07.819 "pending_bdev_io": 0, 00:27:07.819 "completed_nvme_io": 19916, 00:27:07.819 "transports": [ 00:27:07.819 { 00:27:07.819 "trtype": "TCP" 00:27:07.819 } 00:27:07.819 ] 00:27:07.819 } 00:27:07.819 ] 00:27:07.819 }' 00:27:07.819 09:05:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:07.819 09:05:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:07.819 09:05:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:07.819 09:05:30 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:07.819 09:05:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2703665 00:27:15.958 Initializing NVMe Controllers 00:27:15.958 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:15.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:15.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:15.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:15.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:15.958 Initialization complete. Launching workers. 00:27:15.958 ======================================================== 00:27:15.958 Latency(us) 00:27:15.958 Device Information : IOPS MiB/s Average min max 00:27:15.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13753.23 53.72 4653.81 1502.57 9106.09 00:27:15.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14912.91 58.25 4300.19 1427.06 45892.97 00:27:15.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12977.24 50.69 4931.49 1123.22 13840.83 00:27:15.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11290.86 44.10 5668.33 1926.16 17333.76 00:27:15.958 ======================================================== 00:27:15.958 Total : 52934.24 206.77 4838.66 1123.22 45892.97 00:27:15.958 00:27:15.958 09:05:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:15.958 09:05:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:15.958 09:05:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:15.958 09:05:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:15.958 09:05:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:15.958 09:05:38 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:15.958 09:05:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:15.958 rmmod nvme_tcp 00:27:15.958 rmmod nvme_fabrics 00:27:16.219 rmmod nvme_keyring 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2703315 ']' 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2703315 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 2703315 ']' 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 2703315 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2703315 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2703315' 00:27:16.219 killing process with pid 2703315 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 2703315 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 2703315 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:16.219 09:05:38 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:16.219 09:05:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.762 09:05:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:18.762 09:05:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:18.762 09:05:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:20.233 09:05:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:22.147 09:05:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:27.439 09:05:49 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:27.439 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:27.440 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:27.440 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:27.440 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:27.440 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:27.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:27.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:27:27.440 00:27:27.440 --- 10.0.0.2 ping statistics --- 00:27:27.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.440 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:27.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:27.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.420 ms 00:27:27.440 00:27:27.440 --- 10.0.0.1 ping statistics --- 00:27:27.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.440 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:27.440 net.core.busy_poll = 1 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:27.440 net.core.busy_read = 1 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:27.440 09:05:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:27.702 09:05:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:27.702 09:05:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:27.702 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:27.702 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:27.702 09:05:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2708131 00:27:27.702 09:05:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:27.702 09:05:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2708131 00:27:27.702 09:05:50 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 2708131 ']' 00:27:27.702 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.702 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:27.702 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.702 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:27.702 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:27.702 [2024-06-09 09:05:50.090830] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:27:27.702 [2024-06-09 09:05:50.090893] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:27.702 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.702 [2024-06-09 09:05:50.157959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:27.702 [2024-06-09 09:05:50.226861] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:27.702 [2024-06-09 09:05:50.226899] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:27.702 [2024-06-09 09:05:50.226906] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:27.702 [2024-06-09 09:05:50.226913] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:27.702 [2024-06-09 09:05:50.226918] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:27.702 [2024-06-09 09:05:50.227054] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.702 [2024-06-09 09:05:50.227196] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:27:27.702 [2024-06-09 09:05:50.227332] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.702 [2024-06-09 09:05:50.227333] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 
00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.665 09:05:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:28.665 09:05:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.665 09:05:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:28.665 09:05:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.665 09:05:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:28.665 [2024-06-09 09:05:51.035641] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.665 09:05:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.665 09:05:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:28.665 09:05:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.665 09:05:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:28.665 Malloc1 00:27:28.665 09:05:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.665 09:05:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:28.665 09:05:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.665 09:05:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:28.665 09:05:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.666 
09:05:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:28.666 09:05:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.666 09:05:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:28.666 09:05:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.666 09:05:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:28.666 09:05:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.666 09:05:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:28.666 [2024-06-09 09:05:51.095095] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.666 09:05:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.666 09:05:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2708476 00:27:28.666 09:05:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:28.666 09:05:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:28.666 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.579 09:05:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:30.579 09:05:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.579 09:05:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:30.579 09:05:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.579 09:05:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:30.579 
"tick_rate": 2400000000, 00:27:30.579 "poll_groups": [ 00:27:30.579 { 00:27:30.579 "name": "nvmf_tgt_poll_group_000", 00:27:30.579 "admin_qpairs": 1, 00:27:30.579 "io_qpairs": 2, 00:27:30.579 "current_admin_qpairs": 1, 00:27:30.579 "current_io_qpairs": 2, 00:27:30.579 "pending_bdev_io": 0, 00:27:30.579 "completed_nvme_io": 29852, 00:27:30.579 "transports": [ 00:27:30.579 { 00:27:30.579 "trtype": "TCP" 00:27:30.579 } 00:27:30.579 ] 00:27:30.579 }, 00:27:30.579 { 00:27:30.579 "name": "nvmf_tgt_poll_group_001", 00:27:30.579 "admin_qpairs": 0, 00:27:30.579 "io_qpairs": 2, 00:27:30.579 "current_admin_qpairs": 0, 00:27:30.579 "current_io_qpairs": 2, 00:27:30.579 "pending_bdev_io": 0, 00:27:30.579 "completed_nvme_io": 38628, 00:27:30.579 "transports": [ 00:27:30.579 { 00:27:30.579 "trtype": "TCP" 00:27:30.579 } 00:27:30.579 ] 00:27:30.579 }, 00:27:30.579 { 00:27:30.579 "name": "nvmf_tgt_poll_group_002", 00:27:30.579 "admin_qpairs": 0, 00:27:30.579 "io_qpairs": 0, 00:27:30.579 "current_admin_qpairs": 0, 00:27:30.579 "current_io_qpairs": 0, 00:27:30.579 "pending_bdev_io": 0, 00:27:30.579 "completed_nvme_io": 0, 00:27:30.579 "transports": [ 00:27:30.579 { 00:27:30.579 "trtype": "TCP" 00:27:30.579 } 00:27:30.579 ] 00:27:30.579 }, 00:27:30.579 { 00:27:30.579 "name": "nvmf_tgt_poll_group_003", 00:27:30.579 "admin_qpairs": 0, 00:27:30.579 "io_qpairs": 0, 00:27:30.579 "current_admin_qpairs": 0, 00:27:30.579 "current_io_qpairs": 0, 00:27:30.579 "pending_bdev_io": 0, 00:27:30.579 "completed_nvme_io": 0, 00:27:30.579 "transports": [ 00:27:30.579 { 00:27:30.579 "trtype": "TCP" 00:27:30.579 } 00:27:30.579 ] 00:27:30.579 } 00:27:30.579 ] 00:27:30.579 }' 00:27:30.579 09:05:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:30.579 09:05:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:30.839 09:05:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:30.839 09:05:53 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:30.839 09:05:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2708476 00:27:38.975 Initializing NVMe Controllers 00:27:38.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:38.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:38.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:38.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:38.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:38.975 Initialization complete. Launching workers. 00:27:38.975 ======================================================== 00:27:38.975 Latency(us) 00:27:38.975 Device Information : IOPS MiB/s Average min max 00:27:38.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12792.80 49.97 5002.62 1009.81 50709.91 00:27:38.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7507.30 29.33 8534.97 1343.20 54121.16 00:27:38.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9324.00 36.42 6883.68 1446.82 50653.44 00:27:38.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9831.20 38.40 6509.95 1522.73 49506.82 00:27:38.975 ======================================================== 00:27:38.975 Total : 39455.30 154.12 6494.85 1009.81 54121.16 00:27:38.975 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:38.975 09:06:01 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:38.975 rmmod nvme_tcp 00:27:38.975 rmmod nvme_fabrics 00:27:38.975 rmmod nvme_keyring 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2708131 ']' 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2708131 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 2708131 ']' 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 2708131 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2708131 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2708131' 00:27:38.975 killing process with pid 2708131 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 2708131 00:27:38.975 09:06:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 2708131 00:27:39.236 09:06:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:39.236 09:06:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:39.236 09:06:01 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:39.236 09:06:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:39.236 09:06:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:39.236 09:06:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.236 09:06:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:39.236 09:06:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.152 09:06:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:41.152 09:06:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:41.152 00:27:41.152 real 0m51.929s 00:27:41.152 user 2m49.733s 00:27:41.152 sys 0m10.369s 00:27:41.152 09:06:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:41.152 09:06:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:41.152 ************************************ 00:27:41.152 END TEST nvmf_perf_adq 00:27:41.152 ************************************ 00:27:41.152 09:06:03 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:41.152 09:06:03 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:41.152 09:06:03 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:41.152 09:06:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:41.413 ************************************ 00:27:41.413 START TEST nvmf_shutdown 00:27:41.413 ************************************ 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:41.413 * Looking for test storage... 
00:27:41.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:41.413 09:06:03 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.413 09:06:03 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:41.414 09:06:03 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:41.414 ************************************ 00:27:41.414 START TEST nvmf_shutdown_tc1 00:27:41.414 ************************************ 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.414 09:06:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:41.414 09:06:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:49.613 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:49.613 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:49.613 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.613 09:06:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:49.613 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:49.613 
09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:49.613 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:49.614 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:49.614 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:49.614 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:49.614 09:06:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:49.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:49.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:27:49.614 00:27:49.614 --- 10.0.0.2 ping statistics --- 00:27:49.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.614 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:49.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:49.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.421 ms 00:27:49.614 00:27:49.614 --- 10.0.0.1 ping statistics --- 00:27:49.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.614 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2715129 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2715129 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 2715129 ']' 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:49.614 [2024-06-09 09:06:11.143242] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:27:49.614 [2024-06-09 09:06:11.143311] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.614 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.614 [2024-06-09 09:06:11.231548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:49.614 [2024-06-09 09:06:11.326590] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:49.614 [2024-06-09 09:06:11.326643] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:49.614 [2024-06-09 09:06:11.326652] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:49.614 [2024-06-09 09:06:11.326659] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:49.614 [2024-06-09 09:06:11.326665] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:49.614 [2024-06-09 09:06:11.326802] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:27:49.614 [2024-06-09 09:06:11.326972] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:27:49.614 [2024-06-09 09:06:11.327141] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.614 [2024-06-09 09:06:11.327141] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:49.614 [2024-06-09 09:06:11.960838] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:49.614 
09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.614 09:06:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:49.614 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.614 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:49.614 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.614 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:49.614 09:06:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.614 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:49.614 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.614 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:49.614 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.614 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:49.614 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:49.614 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.614 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:49.614 Malloc1 00:27:49.614 [2024-06-09 09:06:12.064295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.614 Malloc2 00:27:49.614 Malloc3 00:27:49.614 Malloc4 00:27:49.875 Malloc5 00:27:49.875 Malloc6 00:27:49.875 Malloc7 00:27:49.875 Malloc8 00:27:49.875 Malloc9 00:27:49.875 Malloc10 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2715554 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2715554 
/var/tmp/bdevperf.sock 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 2715554 ']' 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:50.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.136 { 00:27:50.136 "params": { 00:27:50.136 "name": "Nvme$subsystem", 00:27:50.136 "trtype": "$TEST_TRANSPORT", 00:27:50.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.136 "adrfam": "ipv4", 00:27:50.136 "trsvcid": "$NVMF_PORT", 00:27:50.136 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.136 "hdgst": ${hdgst:-false}, 00:27:50.136 "ddgst": ${ddgst:-false} 00:27:50.136 }, 00:27:50.136 "method": "bdev_nvme_attach_controller" 00:27:50.136 } 00:27:50.136 EOF 00:27:50.136 )") 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.136 { 00:27:50.136 "params": { 00:27:50.136 "name": "Nvme$subsystem", 00:27:50.136 "trtype": "$TEST_TRANSPORT", 00:27:50.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.136 "adrfam": "ipv4", 00:27:50.136 "trsvcid": "$NVMF_PORT", 00:27:50.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.136 "hdgst": ${hdgst:-false}, 00:27:50.136 "ddgst": ${ddgst:-false} 00:27:50.136 }, 00:27:50.136 "method": "bdev_nvme_attach_controller" 00:27:50.136 } 00:27:50.136 EOF 00:27:50.136 )") 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.136 { 00:27:50.136 "params": { 00:27:50.136 "name": "Nvme$subsystem", 00:27:50.136 "trtype": "$TEST_TRANSPORT", 00:27:50.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.136 "adrfam": "ipv4", 00:27:50.136 "trsvcid": "$NVMF_PORT", 00:27:50.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.136 "hdgst": ${hdgst:-false}, 00:27:50.136 "ddgst": ${ddgst:-false} 00:27:50.136 }, 00:27:50.136 "method": 
"bdev_nvme_attach_controller" 00:27:50.136 } 00:27:50.136 EOF 00:27:50.136 )") 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.136 { 00:27:50.136 "params": { 00:27:50.136 "name": "Nvme$subsystem", 00:27:50.136 "trtype": "$TEST_TRANSPORT", 00:27:50.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.136 "adrfam": "ipv4", 00:27:50.136 "trsvcid": "$NVMF_PORT", 00:27:50.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.136 "hdgst": ${hdgst:-false}, 00:27:50.136 "ddgst": ${ddgst:-false} 00:27:50.136 }, 00:27:50.136 "method": "bdev_nvme_attach_controller" 00:27:50.136 } 00:27:50.136 EOF 00:27:50.136 )") 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.136 { 00:27:50.136 "params": { 00:27:50.136 "name": "Nvme$subsystem", 00:27:50.136 "trtype": "$TEST_TRANSPORT", 00:27:50.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.136 "adrfam": "ipv4", 00:27:50.136 "trsvcid": "$NVMF_PORT", 00:27:50.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.136 "hdgst": ${hdgst:-false}, 00:27:50.136 "ddgst": ${ddgst:-false} 00:27:50.136 }, 00:27:50.136 "method": "bdev_nvme_attach_controller" 00:27:50.136 } 00:27:50.136 EOF 00:27:50.136 )") 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.136 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.136 { 00:27:50.136 "params": { 00:27:50.137 "name": "Nvme$subsystem", 00:27:50.137 "trtype": "$TEST_TRANSPORT", 00:27:50.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.137 "adrfam": "ipv4", 00:27:50.137 "trsvcid": "$NVMF_PORT", 00:27:50.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.137 "hdgst": ${hdgst:-false}, 00:27:50.137 "ddgst": ${ddgst:-false} 00:27:50.137 }, 00:27:50.137 "method": "bdev_nvme_attach_controller" 00:27:50.137 } 00:27:50.137 EOF 00:27:50.137 )") 00:27:50.137 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.137 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.137 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.137 { 00:27:50.137 "params": { 00:27:50.137 "name": "Nvme$subsystem", 00:27:50.137 "trtype": "$TEST_TRANSPORT", 00:27:50.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.137 "adrfam": "ipv4", 00:27:50.137 "trsvcid": "$NVMF_PORT", 00:27:50.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.137 "hdgst": ${hdgst:-false}, 00:27:50.137 "ddgst": ${ddgst:-false} 00:27:50.137 }, 00:27:50.137 "method": "bdev_nvme_attach_controller" 00:27:50.137 } 00:27:50.137 EOF 00:27:50.137 )") 00:27:50.137 [2024-06-09 09:06:12.526236] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:27:50.137 [2024-06-09 09:06:12.526303] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:50.137 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.137 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.137 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.137 { 00:27:50.137 "params": { 00:27:50.137 "name": "Nvme$subsystem", 00:27:50.137 "trtype": "$TEST_TRANSPORT", 00:27:50.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.137 "adrfam": "ipv4", 00:27:50.137 "trsvcid": "$NVMF_PORT", 00:27:50.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.137 "hdgst": ${hdgst:-false}, 00:27:50.137 "ddgst": ${ddgst:-false} 00:27:50.137 }, 00:27:50.137 "method": "bdev_nvme_attach_controller" 00:27:50.137 } 00:27:50.137 EOF 00:27:50.137 )") 00:27:50.137 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.137 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.137 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.137 { 00:27:50.137 "params": { 00:27:50.137 "name": "Nvme$subsystem", 00:27:50.137 "trtype": "$TEST_TRANSPORT", 00:27:50.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.137 "adrfam": "ipv4", 00:27:50.137 "trsvcid": "$NVMF_PORT", 00:27:50.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.137 "hdgst": ${hdgst:-false}, 00:27:50.137 "ddgst": ${ddgst:-false} 00:27:50.137 }, 00:27:50.137 "method": "bdev_nvme_attach_controller" 00:27:50.137 } 00:27:50.137 
EOF 00:27:50.137 )") 00:27:50.137 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.137 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.137 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.137 { 00:27:50.137 "params": { 00:27:50.137 "name": "Nvme$subsystem", 00:27:50.137 "trtype": "$TEST_TRANSPORT", 00:27:50.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.137 "adrfam": "ipv4", 00:27:50.137 "trsvcid": "$NVMF_PORT", 00:27:50.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.137 "hdgst": ${hdgst:-false}, 00:27:50.137 "ddgst": ${ddgst:-false} 00:27:50.137 }, 00:27:50.137 "method": "bdev_nvme_attach_controller" 00:27:50.137 } 00:27:50.137 EOF 00:27:50.137 )") 00:27:50.137 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.137 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:50.137 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.137 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:50.137 09:06:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:50.137 "params": { 00:27:50.137 "name": "Nvme1", 00:27:50.137 "trtype": "tcp", 00:27:50.137 "traddr": "10.0.0.2", 00:27:50.137 "adrfam": "ipv4", 00:27:50.137 "trsvcid": "4420", 00:27:50.137 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:50.137 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:50.137 "hdgst": false, 00:27:50.137 "ddgst": false 00:27:50.137 }, 00:27:50.137 "method": "bdev_nvme_attach_controller" 00:27:50.137 },{ 00:27:50.137 "params": { 00:27:50.137 "name": "Nvme2", 00:27:50.137 "trtype": "tcp", 00:27:50.137 "traddr": "10.0.0.2", 00:27:50.137 "adrfam": "ipv4", 00:27:50.137 "trsvcid": "4420", 00:27:50.137 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:50.137 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:50.137 "hdgst": false, 00:27:50.137 "ddgst": false 00:27:50.137 }, 00:27:50.137 "method": "bdev_nvme_attach_controller" 00:27:50.137 },{ 00:27:50.137 "params": { 00:27:50.137 "name": "Nvme3", 00:27:50.137 "trtype": "tcp", 00:27:50.137 "traddr": "10.0.0.2", 00:27:50.137 "adrfam": "ipv4", 00:27:50.137 "trsvcid": "4420", 00:27:50.137 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:50.137 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:50.137 "hdgst": false, 00:27:50.137 "ddgst": false 00:27:50.137 }, 00:27:50.137 "method": "bdev_nvme_attach_controller" 00:27:50.137 },{ 00:27:50.137 "params": { 00:27:50.137 "name": "Nvme4", 00:27:50.137 "trtype": "tcp", 00:27:50.137 "traddr": "10.0.0.2", 00:27:50.137 "adrfam": "ipv4", 00:27:50.137 "trsvcid": "4420", 00:27:50.137 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:50.137 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:50.137 "hdgst": false, 00:27:50.137 "ddgst": false 00:27:50.137 }, 00:27:50.137 "method": "bdev_nvme_attach_controller" 00:27:50.137 },{ 
00:27:50.137 "params": { 00:27:50.137 "name": "Nvme5", 00:27:50.137 "trtype": "tcp", 00:27:50.137 "traddr": "10.0.0.2", 00:27:50.137 "adrfam": "ipv4", 00:27:50.137 "trsvcid": "4420", 00:27:50.137 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:50.137 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:50.137 "hdgst": false, 00:27:50.137 "ddgst": false 00:27:50.137 }, 00:27:50.137 "method": "bdev_nvme_attach_controller" 00:27:50.137 },{ 00:27:50.137 "params": { 00:27:50.137 "name": "Nvme6", 00:27:50.137 "trtype": "tcp", 00:27:50.137 "traddr": "10.0.0.2", 00:27:50.137 "adrfam": "ipv4", 00:27:50.137 "trsvcid": "4420", 00:27:50.137 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:50.137 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:50.137 "hdgst": false, 00:27:50.137 "ddgst": false 00:27:50.137 }, 00:27:50.137 "method": "bdev_nvme_attach_controller" 00:27:50.137 },{ 00:27:50.137 "params": { 00:27:50.137 "name": "Nvme7", 00:27:50.137 "trtype": "tcp", 00:27:50.137 "traddr": "10.0.0.2", 00:27:50.137 "adrfam": "ipv4", 00:27:50.137 "trsvcid": "4420", 00:27:50.137 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:50.137 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:50.137 "hdgst": false, 00:27:50.137 "ddgst": false 00:27:50.137 }, 00:27:50.137 "method": "bdev_nvme_attach_controller" 00:27:50.137 },{ 00:27:50.137 "params": { 00:27:50.137 "name": "Nvme8", 00:27:50.137 "trtype": "tcp", 00:27:50.137 "traddr": "10.0.0.2", 00:27:50.137 "adrfam": "ipv4", 00:27:50.137 "trsvcid": "4420", 00:27:50.137 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:50.137 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:50.137 "hdgst": false, 00:27:50.137 "ddgst": false 00:27:50.137 }, 00:27:50.137 "method": "bdev_nvme_attach_controller" 00:27:50.137 },{ 00:27:50.137 "params": { 00:27:50.137 "name": "Nvme9", 00:27:50.137 "trtype": "tcp", 00:27:50.137 "traddr": "10.0.0.2", 00:27:50.137 "adrfam": "ipv4", 00:27:50.137 "trsvcid": "4420", 00:27:50.137 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:50.137 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:27:50.137 "hdgst": false, 00:27:50.137 "ddgst": false 00:27:50.137 }, 00:27:50.137 "method": "bdev_nvme_attach_controller" 00:27:50.137 },{ 00:27:50.137 "params": { 00:27:50.137 "name": "Nvme10", 00:27:50.137 "trtype": "tcp", 00:27:50.137 "traddr": "10.0.0.2", 00:27:50.137 "adrfam": "ipv4", 00:27:50.137 "trsvcid": "4420", 00:27:50.137 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:50.137 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:50.137 "hdgst": false, 00:27:50.137 "ddgst": false 00:27:50.137 }, 00:27:50.137 "method": "bdev_nvme_attach_controller" 00:27:50.137 }' 00:27:50.137 [2024-06-09 09:06:12.589312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.137 [2024-06-09 09:06:12.653810] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.521 09:06:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:51.521 09:06:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:27:51.521 09:06:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:51.521 09:06:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.521 09:06:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:51.521 09:06:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.521 09:06:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2715554 00:27:51.521 09:06:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:51.521 09:06:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:52.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2715554 Killed 
$rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:52.462 09:06:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2715129 00:27:52.462 09:06:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:52.462 09:06:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:52.462 09:06:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:52.462 09:06:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:52.462 09:06:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.462 09:06:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.462 { 00:27:52.462 "params": { 00:27:52.462 "name": "Nvme$subsystem", 00:27:52.462 "trtype": "$TEST_TRANSPORT", 00:27:52.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.462 "adrfam": "ipv4", 00:27:52.462 "trsvcid": "$NVMF_PORT", 00:27:52.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.462 "hdgst": ${hdgst:-false}, 00:27:52.462 "ddgst": ${ddgst:-false} 00:27:52.462 }, 00:27:52.462 "method": "bdev_nvme_attach_controller" 00:27:52.462 } 00:27:52.462 EOF 00:27:52.462 )") 00:27:52.462 09:06:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:52.462 09:06:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.462 09:06:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.462 { 00:27:52.462 "params": { 00:27:52.462 "name": "Nvme$subsystem", 
00:27:52.462 "trtype": "$TEST_TRANSPORT", 00:27:52.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.462 "adrfam": "ipv4", 00:27:52.462 "trsvcid": "$NVMF_PORT", 00:27:52.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.462 "hdgst": ${hdgst:-false}, 00:27:52.462 "ddgst": ${ddgst:-false} 00:27:52.462 }, 00:27:52.462 "method": "bdev_nvme_attach_controller" 00:27:52.462 } 00:27:52.462 EOF 00:27:52.462 )") 00:27:52.462 09:06:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:52.462 09:06:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.462 09:06:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.462 { 00:27:52.462 "params": { 00:27:52.462 "name": "Nvme$subsystem", 00:27:52.462 "trtype": "$TEST_TRANSPORT", 00:27:52.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.462 "adrfam": "ipv4", 00:27:52.462 "trsvcid": "$NVMF_PORT", 00:27:52.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.462 "hdgst": ${hdgst:-false}, 00:27:52.462 "ddgst": ${ddgst:-false} 00:27:52.462 }, 00:27:52.462 "method": "bdev_nvme_attach_controller" 00:27:52.462 } 00:27:52.462 EOF 00:27:52.462 )") 00:27:52.462 09:06:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:52.462 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.462 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.462 { 00:27:52.462 "params": { 00:27:52.462 "name": "Nvme$subsystem", 00:27:52.462 "trtype": "$TEST_TRANSPORT", 00:27:52.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.462 "adrfam": "ipv4", 00:27:52.462 "trsvcid": "$NVMF_PORT", 00:27:52.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.462 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.462 "hdgst": ${hdgst:-false}, 00:27:52.462 "ddgst": ${ddgst:-false} 00:27:52.462 }, 00:27:52.462 "method": "bdev_nvme_attach_controller" 00:27:52.462 } 00:27:52.462 EOF 00:27:52.462 )") 00:27:52.462 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:52.462 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.462 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.462 { 00:27:52.462 "params": { 00:27:52.462 "name": "Nvme$subsystem", 00:27:52.462 "trtype": "$TEST_TRANSPORT", 00:27:52.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.462 "adrfam": "ipv4", 00:27:52.462 "trsvcid": "$NVMF_PORT", 00:27:52.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.462 "hdgst": ${hdgst:-false}, 00:27:52.462 "ddgst": ${ddgst:-false} 00:27:52.462 }, 00:27:52.462 "method": "bdev_nvme_attach_controller" 00:27:52.462 } 00:27:52.462 EOF 00:27:52.462 )") 00:27:52.462 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:52.462 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.462 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.462 { 00:27:52.462 "params": { 00:27:52.462 "name": "Nvme$subsystem", 00:27:52.462 "trtype": "$TEST_TRANSPORT", 00:27:52.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.463 "adrfam": "ipv4", 00:27:52.463 "trsvcid": "$NVMF_PORT", 00:27:52.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.463 "hdgst": ${hdgst:-false}, 00:27:52.463 "ddgst": ${ddgst:-false} 00:27:52.463 }, 00:27:52.463 "method": "bdev_nvme_attach_controller" 00:27:52.463 } 00:27:52.463 EOF 
00:27:52.463 )") 00:27:52.723 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:52.723 [2024-06-09 09:06:15.023938] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:27:52.723 [2024-06-09 09:06:15.023987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2715939 ] 00:27:52.723 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.723 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.723 { 00:27:52.723 "params": { 00:27:52.723 "name": "Nvme$subsystem", 00:27:52.723 "trtype": "$TEST_TRANSPORT", 00:27:52.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.723 "adrfam": "ipv4", 00:27:52.723 "trsvcid": "$NVMF_PORT", 00:27:52.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.723 "hdgst": ${hdgst:-false}, 00:27:52.723 "ddgst": ${ddgst:-false} 00:27:52.723 }, 00:27:52.723 "method": "bdev_nvme_attach_controller" 00:27:52.723 } 00:27:52.723 EOF 00:27:52.723 )") 00:27:52.723 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:52.723 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.723 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.723 { 00:27:52.723 "params": { 00:27:52.723 "name": "Nvme$subsystem", 00:27:52.723 "trtype": "$TEST_TRANSPORT", 00:27:52.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.723 "adrfam": "ipv4", 00:27:52.723 "trsvcid": "$NVMF_PORT", 00:27:52.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.723 "hdgst": 
${hdgst:-false}, 00:27:52.723 "ddgst": ${ddgst:-false} 00:27:52.723 }, 00:27:52.723 "method": "bdev_nvme_attach_controller" 00:27:52.723 } 00:27:52.723 EOF 00:27:52.723 )") 00:27:52.723 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:52.723 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.723 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.723 { 00:27:52.723 "params": { 00:27:52.723 "name": "Nvme$subsystem", 00:27:52.723 "trtype": "$TEST_TRANSPORT", 00:27:52.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.723 "adrfam": "ipv4", 00:27:52.723 "trsvcid": "$NVMF_PORT", 00:27:52.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.723 "hdgst": ${hdgst:-false}, 00:27:52.723 "ddgst": ${ddgst:-false} 00:27:52.723 }, 00:27:52.723 "method": "bdev_nvme_attach_controller" 00:27:52.723 } 00:27:52.723 EOF 00:27:52.723 )") 00:27:52.723 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:52.723 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.723 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.723 { 00:27:52.723 "params": { 00:27:52.723 "name": "Nvme$subsystem", 00:27:52.723 "trtype": "$TEST_TRANSPORT", 00:27:52.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.723 "adrfam": "ipv4", 00:27:52.723 "trsvcid": "$NVMF_PORT", 00:27:52.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.723 "hdgst": ${hdgst:-false}, 00:27:52.723 "ddgst": ${ddgst:-false} 00:27:52.723 }, 00:27:52.723 "method": "bdev_nvme_attach_controller" 00:27:52.723 } 00:27:52.723 EOF 00:27:52.723 )") 00:27:52.723 EAL: No free 2048 kB hugepages reported on node 1 
00:27:52.723 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:52.723 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:27:52.723 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:52.723 09:06:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:52.723 "params": { 00:27:52.723 "name": "Nvme1", 00:27:52.723 "trtype": "tcp", 00:27:52.723 "traddr": "10.0.0.2", 00:27:52.723 "adrfam": "ipv4", 00:27:52.723 "trsvcid": "4420", 00:27:52.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:52.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:52.723 "hdgst": false, 00:27:52.723 "ddgst": false 00:27:52.723 }, 00:27:52.723 "method": "bdev_nvme_attach_controller" 00:27:52.723 },{ 00:27:52.723 "params": { 00:27:52.723 "name": "Nvme2", 00:27:52.723 "trtype": "tcp", 00:27:52.723 "traddr": "10.0.0.2", 00:27:52.723 "adrfam": "ipv4", 00:27:52.723 "trsvcid": "4420", 00:27:52.723 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:52.723 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:52.723 "hdgst": false, 00:27:52.723 "ddgst": false 00:27:52.723 }, 00:27:52.723 "method": "bdev_nvme_attach_controller" 00:27:52.723 },{ 00:27:52.723 "params": { 00:27:52.723 "name": "Nvme3", 00:27:52.723 "trtype": "tcp", 00:27:52.723 "traddr": "10.0.0.2", 00:27:52.723 "adrfam": "ipv4", 00:27:52.723 "trsvcid": "4420", 00:27:52.723 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:52.723 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:52.723 "hdgst": false, 00:27:52.723 "ddgst": false 00:27:52.723 }, 00:27:52.723 "method": "bdev_nvme_attach_controller" 00:27:52.723 },{ 00:27:52.723 "params": { 00:27:52.723 "name": "Nvme4", 00:27:52.723 "trtype": "tcp", 00:27:52.723 "traddr": "10.0.0.2", 00:27:52.723 "adrfam": "ipv4", 00:27:52.723 "trsvcid": "4420", 00:27:52.723 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:52.723 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:52.723 "hdgst": 
false, 00:27:52.723 "ddgst": false 00:27:52.723 }, 00:27:52.723 "method": "bdev_nvme_attach_controller" 00:27:52.723 },{ 00:27:52.723 "params": { 00:27:52.723 "name": "Nvme5", 00:27:52.723 "trtype": "tcp", 00:27:52.723 "traddr": "10.0.0.2", 00:27:52.723 "adrfam": "ipv4", 00:27:52.723 "trsvcid": "4420", 00:27:52.723 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:52.723 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:52.723 "hdgst": false, 00:27:52.723 "ddgst": false 00:27:52.723 }, 00:27:52.723 "method": "bdev_nvme_attach_controller" 00:27:52.723 },{ 00:27:52.723 "params": { 00:27:52.723 "name": "Nvme6", 00:27:52.723 "trtype": "tcp", 00:27:52.723 "traddr": "10.0.0.2", 00:27:52.723 "adrfam": "ipv4", 00:27:52.723 "trsvcid": "4420", 00:27:52.723 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:52.723 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:52.723 "hdgst": false, 00:27:52.723 "ddgst": false 00:27:52.723 }, 00:27:52.723 "method": "bdev_nvme_attach_controller" 00:27:52.723 },{ 00:27:52.723 "params": { 00:27:52.723 "name": "Nvme7", 00:27:52.723 "trtype": "tcp", 00:27:52.723 "traddr": "10.0.0.2", 00:27:52.723 "adrfam": "ipv4", 00:27:52.723 "trsvcid": "4420", 00:27:52.723 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:52.723 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:52.723 "hdgst": false, 00:27:52.723 "ddgst": false 00:27:52.723 }, 00:27:52.723 "method": "bdev_nvme_attach_controller" 00:27:52.723 },{ 00:27:52.723 "params": { 00:27:52.723 "name": "Nvme8", 00:27:52.723 "trtype": "tcp", 00:27:52.723 "traddr": "10.0.0.2", 00:27:52.723 "adrfam": "ipv4", 00:27:52.723 "trsvcid": "4420", 00:27:52.723 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:52.723 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:52.723 "hdgst": false, 00:27:52.723 "ddgst": false 00:27:52.723 }, 00:27:52.723 "method": "bdev_nvme_attach_controller" 00:27:52.723 },{ 00:27:52.723 "params": { 00:27:52.723 "name": "Nvme9", 00:27:52.723 "trtype": "tcp", 00:27:52.723 "traddr": "10.0.0.2", 00:27:52.723 "adrfam": "ipv4", 
00:27:52.723 "trsvcid": "4420",
00:27:52.723 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:27:52.723 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:27:52.723 "hdgst": false,
00:27:52.723 "ddgst": false
00:27:52.723 },
00:27:52.723 "method": "bdev_nvme_attach_controller"
00:27:52.723 },{
00:27:52.723 "params": {
00:27:52.723 "name": "Nvme10",
00:27:52.723 "trtype": "tcp",
00:27:52.723 "traddr": "10.0.0.2",
00:27:52.723 "adrfam": "ipv4",
00:27:52.723 "trsvcid": "4420",
00:27:52.723 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:27:52.724 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:27:52.724 "hdgst": false,
00:27:52.724 "ddgst": false
00:27:52.724 },
00:27:52.724 "method": "bdev_nvme_attach_controller"
00:27:52.724 }'
00:27:52.724 [2024-06-09 09:06:15.084006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:52.724 [2024-06-09 09:06:15.149380] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:27:54.108 Running I/O for 1 seconds...
00:27:55.494
00:27:55.494 Latency(us)
00:27:55.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:55.494 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.494 Verification LBA range: start 0x0 length 0x400
00:27:55.494 Nvme1n1 : 1.14 224.37 14.02 0.00 0.00 282450.77 25340.59 270882.13
00:27:55.494 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.494 Verification LBA range: start 0x0 length 0x400
00:27:55.494 Nvme2n1 : 1.14 168.70 10.54 0.00 0.00 368800.43 23483.73 356515.84
00:27:55.494 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.494 Verification LBA range: start 0x0 length 0x400
00:27:55.494 Nvme3n1 : 1.14 225.20 14.07 0.00 0.00 271348.91 23702.19 251658.24
00:27:55.494 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.494 Verification LBA range: start 0x0 length 0x400
00:27:55.494 Nvme4n1 : 1.15 166.52 10.41 0.00 0.00 360915.06 34078.72 347777.71
00:27:55.494 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.494 Verification LBA range: start 0x0 length 0x400
00:27:55.494 Nvme5n1 : 1.09 234.60 14.66 0.00 0.00 250213.97 21299.20 256901.12
00:27:55.494 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.494 Verification LBA range: start 0x0 length 0x400
00:27:55.494 Nvme6n1 : 1.13 339.85 21.24 0.00 0.00 169980.73 13161.81 222822.40
00:27:55.494 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.494 Verification LBA range: start 0x0 length 0x400
00:27:55.494 Nvme7n1 : 1.15 222.26 13.89 0.00 0.00 255739.52 23920.64 258648.75
00:27:55.494 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.494 Verification LBA range: start 0x0 length 0x400
00:27:55.494 Nvme8n1 : 1.16 276.79 17.30 0.00 0.00 201728.00 25449.81 270882.13
00:27:55.494 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.494 Verification LBA range: start 0x0 length 0x400
00:27:55.494 Nvme9n1 : 1.16 275.52 17.22 0.00 0.00 198903.30 18786.99 239424.85
00:27:55.494 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:55.494 Verification LBA range: start 0x0 length 0x400
00:27:55.494 Nvme10n1 : 1.24 258.99 16.19 0.00 0.00 209894.06 9502.72 302339.41
00:27:55.494 ===================================================================================================================
00:27:55.494 Total : 2392.79 149.55 0.00 0.00 244201.63 9502.72 356515.84
00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:55.494 09:06:17
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:55.494 rmmod nvme_tcp 00:27:55.494 rmmod nvme_fabrics 00:27:55.494 rmmod nvme_keyring 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2715129 ']' 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2715129 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 2715129 ']' 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 2715129 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:55.494 09:06:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2715129 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2715129' 00:27:55.494 killing process with pid 2715129 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 2715129 00:27:55.494 09:06:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 2715129 00:27:55.756 09:06:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:55.756 09:06:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:55.756 09:06:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:55.756 09:06:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:55.756 09:06:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:55.756 09:06:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.756 09:06:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:55.756 09:06:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.672 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:57.672 00:27:57.672 real 0m16.336s 00:27:57.672 user 0m32.698s 00:27:57.672 sys 0m6.634s 00:27:57.672 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:27:57.672 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:57.672 ************************************ 00:27:57.672 END TEST nvmf_shutdown_tc1 00:27:57.672 ************************************ 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:57.934 ************************************ 00:27:57.934 START TEST nvmf_shutdown_tc2 00:27:57.934 ************************************ 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:57.934 09:06:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:57.934 
09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:57.934 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:57.934 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.934 09:06:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:57.934 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:57.935 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:57.935 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:57.935 09:06:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:57.935 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:58.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:58.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms
00:27:58.196
00:27:58.196 --- 10.0.0.2 ping statistics ---
00:27:58.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:58.196 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:58.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:58.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms
00:27:58.196
00:27:58.196 --- 10.0.0.1 ping statistics ---
00:27:58.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:58.196 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
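[Editor's note] The `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")` step in the log above is the bash array-prefix idiom: the `ip netns exec <ns>` words are prepended to the target's command array, so every later launch of the app runs inside the test namespace. A minimal, self-contained sketch of the idiom (`demo_ns` and `my_app` are placeholder names, not values from this run):

```shell
#!/usr/bin/env bash
# Array-prefix idiom: prepend a wrapper command to an existing command array.
# demo_ns and my_app are placeholders, not names from the real harness.
NVMF_TARGET_NS_CMD=(ip netns exec demo_ns)
NVMF_APP=(my_app -i 0 -e 0xFFFF)

# Prepend the namespace wrapper; word order and quoting are preserved.
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

echo "${NVMF_APP[@]}"   # prints: ip netns exec demo_ns my_app -i 0 -e 0xFFFF
```

Keeping the command as an array rather than a flat string preserves word boundaries even when individual arguments contain spaces.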
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2717220
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2717220
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2717220 ']'
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:58.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:58.196 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable
00:27:58.197 09:06:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:58.457 [2024-06-09 09:06:20.758165] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
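[Editor's note] `waitforlisten` above blocks until the freshly started `nvmf_tgt` (pid 2717220) is accepting RPCs on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A simplified sketch of that poll loop, under stated assumptions: it only checks that a path exists, whereas the real helper probes the UNIX-domain RPC socket, and `/tmp/demo.sock` is a placeholder path, not one from this run:

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style poll: spin until a path appears or the
# retry budget runs out. Simplified: the real helper probes the SPDK RPC
# UNIX socket; here we only test path existence.
wait_for_path() {
    local path=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}

rm -f /tmp/demo.sock
( sleep 0.3; : > /tmp/demo.sock ) &   # simulate the target coming up
wait_for_path /tmp/demo.sock && echo "listening"
```

Bounding the wait (rather than looping forever) is what lets the harness fail fast when the target crashes during startup.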
00:27:58.457 [2024-06-09 09:06:20.758232] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:58.457 EAL: No free 2048 kB hugepages reported on node 1
00:27:58.457 [2024-06-09 09:06:20.842156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:58.457 [2024-06-09 09:06:20.903529] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:58.457 [2024-06-09 09:06:20.903564] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:58.457 [2024-06-09 09:06:20.903570] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:58.458 [2024-06-09 09:06:20.903574] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:58.458 [2024-06-09 09:06:20.903578] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
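[Editor's note] The `-c 0x1E` EAL parameter above is the core mask that yields "Total cores available: 4": 0x1E is binary 11110, so bits 1 through 4 are set and the reactors are pinned to cores 1, 2, 3 and 4, matching the "Reactor started on core N" notices that follow. A small sketch of how such a mask decodes (the helper name is ours, not SPDK's):

```shell
#!/usr/bin/env bash
# Decode a core mask into the list of CPU cores it selects.
# mask_to_cores is an illustrative helper, not an SPDK function.
mask_to_cores() {
    local mask=$(( $1 ))   # bash arithmetic accepts the 0x prefix
    local core cores=()
    for ((core = 0; core < 64; core++)); do
        (( (mask >> core) & 1 )) && cores+=("$core")
    done
    echo "${cores[*]}"
}

mask_to_cores 0x1E   # -> 1 2 3 4
```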
00:27:58.458 [2024-06-09 09:06:20.903689] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:27:58.458 [2024-06-09 09:06:20.903851] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:27:58.458 [2024-06-09 09:06:20.904010] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:27:58.458 [2024-06-09 09:06:20.904013] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4
00:27:59.029 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:27:59.029 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0
00:27:59.029 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:59.029 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable
00:27:59.029 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:59.029 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:59.029 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:59.029 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:27:59.029 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:59.029 [2024-06-09 09:06:21.580725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:59.029 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:27:59.029 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10})
00:27:59.029 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems
00:27:59.029
09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:59.029 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:59.290 09:06:21 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.290 09:06:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.290 Malloc1 00:27:59.290 [2024-06-09 09:06:21.679495] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.290 Malloc2 00:27:59.290 Malloc3 00:27:59.290 Malloc4 00:27:59.290 Malloc5 00:27:59.290 Malloc6 00:27:59.551 Malloc7 00:27:59.551 Malloc8 00:27:59.551 Malloc9 00:27:59.551 Malloc10 00:27:59.551 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.551 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:59.551 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:59.551 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.551 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2717449 00:27:59.551 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 
2717449 /var/tmp/bdevperf.sock 00:27:59.551 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2717449 ']' 00:27:59.551 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:59.551 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:59.551 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:59.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:59.551 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:59.551 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:59.551 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:59.551 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.551 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:59.551 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:59.551 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.551 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.551 { 00:27:59.551 "params": { 00:27:59.551 "name": "Nvme$subsystem", 00:27:59.551 "trtype": "$TEST_TRANSPORT", 00:27:59.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.551 "adrfam": "ipv4", 00:27:59.551 "trsvcid": "$NVMF_PORT", 
00:27:59.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.552 "hdgst": ${hdgst:-false}, 00:27:59.552 "ddgst": ${ddgst:-false} 00:27:59.552 }, 00:27:59.552 "method": "bdev_nvme_attach_controller" 00:27:59.552 } 00:27:59.552 EOF 00:27:59.552 )") 00:27:59.552 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:59.552 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.552 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.552 { 00:27:59.552 "params": { 00:27:59.552 "name": "Nvme$subsystem", 00:27:59.552 "trtype": "$TEST_TRANSPORT", 00:27:59.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.552 "adrfam": "ipv4", 00:27:59.552 "trsvcid": "$NVMF_PORT", 00:27:59.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.552 "hdgst": ${hdgst:-false}, 00:27:59.552 "ddgst": ${ddgst:-false} 00:27:59.552 }, 00:27:59.552 "method": "bdev_nvme_attach_controller" 00:27:59.552 } 00:27:59.552 EOF 00:27:59.552 )") 00:27:59.552 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:59.552 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.552 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.552 { 00:27:59.552 "params": { 00:27:59.552 "name": "Nvme$subsystem", 00:27:59.552 "trtype": "$TEST_TRANSPORT", 00:27:59.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.552 "adrfam": "ipv4", 00:27:59.552 "trsvcid": "$NVMF_PORT", 00:27:59.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.552 "hdgst": ${hdgst:-false}, 00:27:59.552 "ddgst": ${ddgst:-false} 00:27:59.552 }, 00:27:59.552 
"method": "bdev_nvme_attach_controller" 00:27:59.552 } 00:27:59.552 EOF 00:27:59.552 )") 00:27:59.552 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:59.552 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.552 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.552 { 00:27:59.552 "params": { 00:27:59.552 "name": "Nvme$subsystem", 00:27:59.552 "trtype": "$TEST_TRANSPORT", 00:27:59.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.552 "adrfam": "ipv4", 00:27:59.552 "trsvcid": "$NVMF_PORT", 00:27:59.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.552 "hdgst": ${hdgst:-false}, 00:27:59.552 "ddgst": ${ddgst:-false} 00:27:59.552 }, 00:27:59.552 "method": "bdev_nvme_attach_controller" 00:27:59.552 } 00:27:59.552 EOF 00:27:59.552 )") 00:27:59.552 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:59.813 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.813 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.813 { 00:27:59.813 "params": { 00:27:59.813 "name": "Nvme$subsystem", 00:27:59.813 "trtype": "$TEST_TRANSPORT", 00:27:59.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.813 "adrfam": "ipv4", 00:27:59.813 "trsvcid": "$NVMF_PORT", 00:27:59.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.813 "hdgst": ${hdgst:-false}, 00:27:59.813 "ddgst": ${ddgst:-false} 00:27:59.813 }, 00:27:59.813 "method": "bdev_nvme_attach_controller" 00:27:59.813 } 00:27:59.813 EOF 00:27:59.813 )") 00:27:59.814 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:59.814 09:06:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.814 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.814 { 00:27:59.814 "params": { 00:27:59.814 "name": "Nvme$subsystem", 00:27:59.814 "trtype": "$TEST_TRANSPORT", 00:27:59.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.814 "adrfam": "ipv4", 00:27:59.814 "trsvcid": "$NVMF_PORT", 00:27:59.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.814 "hdgst": ${hdgst:-false}, 00:27:59.814 "ddgst": ${ddgst:-false} 00:27:59.814 }, 00:27:59.814 "method": "bdev_nvme_attach_controller" 00:27:59.814 } 00:27:59.814 EOF 00:27:59.814 )") 00:27:59.814 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:59.814 [2024-06-09 09:06:22.125326] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:27:59.814 [2024-06-09 09:06:22.125382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2717449 ] 00:27:59.814 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.814 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.814 { 00:27:59.814 "params": { 00:27:59.814 "name": "Nvme$subsystem", 00:27:59.814 "trtype": "$TEST_TRANSPORT", 00:27:59.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.814 "adrfam": "ipv4", 00:27:59.814 "trsvcid": "$NVMF_PORT", 00:27:59.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.814 "hdgst": ${hdgst:-false}, 00:27:59.814 "ddgst": ${ddgst:-false} 00:27:59.814 }, 00:27:59.814 "method": "bdev_nvme_attach_controller" 
00:27:59.814 } 00:27:59.814 EOF 00:27:59.814 )") 00:27:59.814 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:59.814 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.814 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.814 { 00:27:59.814 "params": { 00:27:59.814 "name": "Nvme$subsystem", 00:27:59.814 "trtype": "$TEST_TRANSPORT", 00:27:59.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.814 "adrfam": "ipv4", 00:27:59.814 "trsvcid": "$NVMF_PORT", 00:27:59.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.814 "hdgst": ${hdgst:-false}, 00:27:59.814 "ddgst": ${ddgst:-false} 00:27:59.814 }, 00:27:59.814 "method": "bdev_nvme_attach_controller" 00:27:59.814 } 00:27:59.814 EOF 00:27:59.814 )") 00:27:59.814 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:59.814 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.814 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.814 { 00:27:59.814 "params": { 00:27:59.814 "name": "Nvme$subsystem", 00:27:59.814 "trtype": "$TEST_TRANSPORT", 00:27:59.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.814 "adrfam": "ipv4", 00:27:59.814 "trsvcid": "$NVMF_PORT", 00:27:59.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.814 "hdgst": ${hdgst:-false}, 00:27:59.814 "ddgst": ${ddgst:-false} 00:27:59.814 }, 00:27:59.814 "method": "bdev_nvme_attach_controller" 00:27:59.814 } 00:27:59.814 EOF 00:27:59.814 )") 00:27:59.814 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:59.814 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:27:59.814 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.814 { 00:27:59.814 "params": { 00:27:59.814 "name": "Nvme$subsystem", 00:27:59.814 "trtype": "$TEST_TRANSPORT", 00:27:59.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.814 "adrfam": "ipv4", 00:27:59.814 "trsvcid": "$NVMF_PORT", 00:27:59.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.814 "hdgst": ${hdgst:-false}, 00:27:59.814 "ddgst": ${ddgst:-false} 00:27:59.814 }, 00:27:59.814 "method": "bdev_nvme_attach_controller" 00:27:59.814 } 00:27:59.814 EOF 00:27:59.814 )") 00:27:59.814 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:59.814 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:27:59.814 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:59.814 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.814 09:06:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:59.814 "params": { 00:27:59.814 "name": "Nvme1", 00:27:59.814 "trtype": "tcp", 00:27:59.814 "traddr": "10.0.0.2", 00:27:59.814 "adrfam": "ipv4", 00:27:59.814 "trsvcid": "4420", 00:27:59.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:59.814 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:59.814 "hdgst": false, 00:27:59.814 "ddgst": false 00:27:59.814 }, 00:27:59.814 "method": "bdev_nvme_attach_controller" 00:27:59.814 },{ 00:27:59.814 "params": { 00:27:59.814 "name": "Nvme2", 00:27:59.814 "trtype": "tcp", 00:27:59.814 "traddr": "10.0.0.2", 00:27:59.814 "adrfam": "ipv4", 00:27:59.814 "trsvcid": "4420", 00:27:59.814 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:59.814 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:59.814 "hdgst": false, 00:27:59.814 "ddgst": false 00:27:59.814 }, 00:27:59.814 "method": "bdev_nvme_attach_controller" 
00:27:59.814 },{ 00:27:59.814 "params": { 00:27:59.814 "name": "Nvme3", 00:27:59.814 "trtype": "tcp", 00:27:59.814 "traddr": "10.0.0.2", 00:27:59.814 "adrfam": "ipv4", 00:27:59.814 "trsvcid": "4420", 00:27:59.814 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:59.814 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:59.814 "hdgst": false, 00:27:59.814 "ddgst": false 00:27:59.814 }, 00:27:59.814 "method": "bdev_nvme_attach_controller" 00:27:59.814 },{ 00:27:59.814 "params": { 00:27:59.814 "name": "Nvme4", 00:27:59.814 "trtype": "tcp", 00:27:59.814 "traddr": "10.0.0.2", 00:27:59.814 "adrfam": "ipv4", 00:27:59.814 "trsvcid": "4420", 00:27:59.814 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:59.814 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:59.814 "hdgst": false, 00:27:59.814 "ddgst": false 00:27:59.814 }, 00:27:59.814 "method": "bdev_nvme_attach_controller" 00:27:59.814 },{ 00:27:59.814 "params": { 00:27:59.814 "name": "Nvme5", 00:27:59.814 "trtype": "tcp", 00:27:59.814 "traddr": "10.0.0.2", 00:27:59.814 "adrfam": "ipv4", 00:27:59.814 "trsvcid": "4420", 00:27:59.814 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:59.814 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:59.814 "hdgst": false, 00:27:59.814 "ddgst": false 00:27:59.814 }, 00:27:59.814 "method": "bdev_nvme_attach_controller" 00:27:59.814 },{ 00:27:59.814 "params": { 00:27:59.814 "name": "Nvme6", 00:27:59.814 "trtype": "tcp", 00:27:59.814 "traddr": "10.0.0.2", 00:27:59.814 "adrfam": "ipv4", 00:27:59.814 "trsvcid": "4420", 00:27:59.814 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:59.814 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:59.814 "hdgst": false, 00:27:59.814 "ddgst": false 00:27:59.814 }, 00:27:59.814 "method": "bdev_nvme_attach_controller" 00:27:59.814 },{ 00:27:59.814 "params": { 00:27:59.814 "name": "Nvme7", 00:27:59.814 "trtype": "tcp", 00:27:59.814 "traddr": "10.0.0.2", 00:27:59.814 "adrfam": "ipv4", 00:27:59.814 "trsvcid": "4420", 00:27:59.814 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:59.814 
"hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:59.814 "hdgst": false, 00:27:59.814 "ddgst": false 00:27:59.814 }, 00:27:59.814 "method": "bdev_nvme_attach_controller" 00:27:59.814 },{ 00:27:59.814 "params": { 00:27:59.814 "name": "Nvme8", 00:27:59.814 "trtype": "tcp", 00:27:59.814 "traddr": "10.0.0.2", 00:27:59.814 "adrfam": "ipv4", 00:27:59.814 "trsvcid": "4420", 00:27:59.814 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:59.814 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:59.814 "hdgst": false, 00:27:59.814 "ddgst": false 00:27:59.814 }, 00:27:59.814 "method": "bdev_nvme_attach_controller" 00:27:59.814 },{ 00:27:59.814 "params": { 00:27:59.814 "name": "Nvme9", 00:27:59.814 "trtype": "tcp", 00:27:59.814 "traddr": "10.0.0.2", 00:27:59.814 "adrfam": "ipv4", 00:27:59.814 "trsvcid": "4420", 00:27:59.814 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:59.814 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:59.814 "hdgst": false, 00:27:59.814 "ddgst": false 00:27:59.814 }, 00:27:59.814 "method": "bdev_nvme_attach_controller" 00:27:59.814 },{ 00:27:59.814 "params": { 00:27:59.814 "name": "Nvme10", 00:27:59.814 "trtype": "tcp", 00:27:59.814 "traddr": "10.0.0.2", 00:27:59.814 "adrfam": "ipv4", 00:27:59.814 "trsvcid": "4420", 00:27:59.814 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:59.814 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:59.814 "hdgst": false, 00:27:59.814 "ddgst": false 00:27:59.815 }, 00:27:59.815 "method": "bdev_nvme_attach_controller" 00:27:59.815 }' 00:27:59.815 [2024-06-09 09:06:22.194566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.815 [2024-06-09 09:06:22.259058] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.198 Running I/O for 10 seconds... 
00:28:01.198 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:01.198 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:28:01.198 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:01.198 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.198 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.459 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.459 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:01.459 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:01.459 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:01.459 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:01.459 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:01.459 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:01.459 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:01.459 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:01.459 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:01.459 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.459 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:28:01.459 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.459 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:01.459 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:01.459 09:06:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:01.720 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:01.720 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:01.720 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:01.720 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:01.720 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.720 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.720 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.720 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=74 00:28:01.720 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 74 -ge 100 ']' 00:28:01.720 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:01.981 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:01.981 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:01.981 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:01.981 09:06:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:01.981 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.981 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.981 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.241 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:28:02.241 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:28:02.241 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:02.241 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:02.241 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:02.241 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2717449 00:28:02.241 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 2717449 ']' 00:28:02.241 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 2717449 00:28:02.241 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:28:02.241 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:02.241 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2717449 00:28:02.241 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:02.241 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:02.241 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 2717449' 00:28:02.241 killing process with pid 2717449 00:28:02.241 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 2717449 00:28:02.241 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 2717449
00:28:02.241 Received shutdown signal, test time was about 1.019373 seconds
00:28:02.242 
00:28:02.242 Latency(us)
00:28:02.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:02.242 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:02.242 Verification LBA range: start 0x0 length 0x400
00:28:02.242 Nvme1n1 : 0.97 269.70 16.86 0.00 0.00 234319.13 19988.48 220200.96
00:28:02.242 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:02.242 Verification LBA range: start 0x0 length 0x400
00:28:02.242 Nvme2n1 : 0.97 264.91 16.56 0.00 0.00 233639.68 21408.43 244667.73
00:28:02.242 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:02.242 Verification LBA range: start 0x0 length 0x400
00:28:02.242 Nvme3n1 : 0.96 272.04 17.00 0.00 0.00 221435.41 9666.56 246415.36
00:28:02.242 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:02.242 Verification LBA range: start 0x0 length 0x400
00:28:02.242 Nvme4n1 : 1.00 192.83 12.05 0.00 0.00 309310.58 28180.48 312825.17
00:28:02.242 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:02.242 Verification LBA range: start 0x0 length 0x400
00:28:02.242 Nvme5n1 : 0.97 198.48 12.40 0.00 0.00 292979.77 24576.00 242920.11
00:28:02.242 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:02.242 Verification LBA range: start 0x0 length 0x400
00:28:02.242 Nvme6n1 : 0.99 193.43 12.09 0.00 0.00 295635.63 24139.09 335544.32
00:28:02.242 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:02.242 Verification LBA range: start 0x0 length 0x400
00:28:02.242 Nvme7n1 : 0.99 257.36 16.08 0.00 0.00 216718.61 21080.75 279620.27
00:28:02.242 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:02.242 Verification LBA range: start 0x0 length 0x400
00:28:02.242 Nvme8n1 : 0.96 200.78 12.55 0.00 0.00 270484.48 23483.73 258648.75
00:28:02.242 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:02.242 Verification LBA range: start 0x0 length 0x400
00:28:02.242 Nvme9n1 : 0.98 195.82 12.24 0.00 0.00 272488.11 28617.39 298844.16
00:28:02.242 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:02.242 Verification LBA range: start 0x0 length 0x400
00:28:02.242 Nvme10n1 : 1.02 251.35 15.71 0.00 0.00 199510.83 18459.31 235929.60
00:28:02.242 ===================================================================================================================
00:28:02.242 Total : 2296.71 143.54 0.00 0.00 249787.39 9666.56 335544.32
00:28:02.502 09:06:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2717220 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:03.445 rmmod nvme_tcp 00:28:03.445 rmmod nvme_fabrics 00:28:03.445 rmmod nvme_keyring 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2717220 ']' 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2717220 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 2717220 ']' 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 2717220 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:03.445 09:06:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2717220 00:28:03.707 09:06:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:03.707 09:06:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:03.707 09:06:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2717220' 00:28:03.707 killing process with pid 2717220 00:28:03.707 09:06:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 2717220 00:28:03.707 09:06:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 2717220 00:28:03.707 09:06:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:03.707 09:06:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:03.707 09:06:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:03.707 09:06:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:03.707 09:06:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:03.707 09:06:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.707 09:06:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:03.707 09:06:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:06.254 00:28:06.254 real 0m8.012s 00:28:06.254 user 0m23.776s 00:28:06.254 sys 0m1.477s 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:06.254 ************************************ 00:28:06.254 END TEST nvmf_shutdown_tc2 00:28:06.254 ************************************ 
00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:06.254 ************************************ 00:28:06.254 START TEST nvmf_shutdown_tc3 00:28:06.254 ************************************ 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:06.254 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 
00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:06.255 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:06.255 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:06.255 09:06:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:06.255 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:06.255 09:06:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:06.255 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:06.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:06.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms 00:28:06.255 00:28:06.255 --- 10.0.0.2 ping statistics --- 00:28:06.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.255 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:28:06.255 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:06.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:06.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.420 ms 00:28:06.256 00:28:06.256 --- 10.0.0.1 ping statistics --- 00:28:06.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.256 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2718888 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2718888 00:28:06.256 09:06:28 
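[Editor's note] After `nvmf_tcp_init` moves `cvl_0_0` into the `cvl_0_0_ns_spdk` namespace and both pings succeed, the harness arranges for the target to run inside that namespace by prefixing the app command array with `ip netns exec`, as seen in the `NVMF_APP=(...)` line above. A self-contained sketch of that array-prefix trick (the `nvmf_tgt` path and flags here are illustrative, not the full command from the log):

```shell
# Rebuild NVMF_APP so the target binary is launched inside the test netns.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk                 # namespace name from the log
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(./build/bin/nvmf_tgt -i 0 -m 0x1E)          # illustrative arguments
# Word-for-word prefix: the netns wrapper words come first, then the app words.
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
```

Expanding `"${NVMF_APP[@]}"` then yields `ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -m 0x1E`, which keeps each argument intact even if it contains spaces.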
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 2718888 ']' 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:06.256 09:06:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:06.256 [2024-06-09 09:06:28.767942] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:28:06.256 [2024-06-09 09:06:28.767989] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.256 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.517 [2024-06-09 09:06:28.850253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:06.517 [2024-06-09 09:06:28.904852] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:06.517 [2024-06-09 09:06:28.904881] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:06.517 [2024-06-09 09:06:28.904887] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:06.517 [2024-06-09 09:06:28.904892] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:06.517 [2024-06-09 09:06:28.904896] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:06.517 [2024-06-09 09:06:28.905003] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:06.517 [2024-06-09 09:06:28.905162] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:06.517 [2024-06-09 09:06:28.905319] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.517 [2024-06-09 09:06:28.905321] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:07.094 [2024-06-09 09:06:29.579669] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.094 09:06:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:07.356 Malloc1 00:28:07.356 [2024-06-09 09:06:29.678478] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:07.356 Malloc2 00:28:07.356 Malloc3 00:28:07.356 Malloc4 00:28:07.356 Malloc5 00:28:07.356 Malloc6 00:28:07.356 Malloc7 00:28:07.617 Malloc8 00:28:07.617 Malloc9 00:28:07.617 Malloc10 00:28:07.617 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.617 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:07.617 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@729 -- # xtrace_disable 00:28:07.617 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:07.617 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2719274 00:28:07.617 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2719274 /var/tmp/bdevperf.sock 00:28:07.617 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 2719274 ']' 00:28:07.617 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:07.617 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:07.617 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:07.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:07.618 { 00:28:07.618 "params": { 00:28:07.618 "name": "Nvme$subsystem", 00:28:07.618 "trtype": "$TEST_TRANSPORT", 00:28:07.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:07.618 "adrfam": "ipv4", 00:28:07.618 "trsvcid": "$NVMF_PORT", 00:28:07.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:07.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:07.618 "hdgst": ${hdgst:-false}, 00:28:07.618 "ddgst": ${ddgst:-false} 00:28:07.618 }, 00:28:07.618 "method": "bdev_nvme_attach_controller" 00:28:07.618 } 00:28:07.618 EOF 00:28:07.618 )") 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:07.618 { 00:28:07.618 "params": { 00:28:07.618 "name": 
"Nvme$subsystem", 00:28:07.618 "trtype": "$TEST_TRANSPORT", 00:28:07.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:07.618 "adrfam": "ipv4", 00:28:07.618 "trsvcid": "$NVMF_PORT", 00:28:07.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:07.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:07.618 "hdgst": ${hdgst:-false}, 00:28:07.618 "ddgst": ${ddgst:-false} 00:28:07.618 }, 00:28:07.618 "method": "bdev_nvme_attach_controller" 00:28:07.618 } 00:28:07.618 EOF 00:28:07.618 )") 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:07.618 { 00:28:07.618 "params": { 00:28:07.618 "name": "Nvme$subsystem", 00:28:07.618 "trtype": "$TEST_TRANSPORT", 00:28:07.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:07.618 "adrfam": "ipv4", 00:28:07.618 "trsvcid": "$NVMF_PORT", 00:28:07.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:07.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:07.618 "hdgst": ${hdgst:-false}, 00:28:07.618 "ddgst": ${ddgst:-false} 00:28:07.618 }, 00:28:07.618 "method": "bdev_nvme_attach_controller" 00:28:07.618 } 00:28:07.618 EOF 00:28:07.618 )") 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:07.618 { 00:28:07.618 "params": { 00:28:07.618 "name": "Nvme$subsystem", 00:28:07.618 "trtype": "$TEST_TRANSPORT", 00:28:07.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:07.618 "adrfam": "ipv4", 00:28:07.618 "trsvcid": "$NVMF_PORT", 00:28:07.618 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:07.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:07.618 "hdgst": ${hdgst:-false}, 00:28:07.618 "ddgst": ${ddgst:-false} 00:28:07.618 }, 00:28:07.618 "method": "bdev_nvme_attach_controller" 00:28:07.618 } 00:28:07.618 EOF 00:28:07.618 )") 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:07.618 [2024-06-09 09:06:30.108159] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:28:07.618 [2024-06-09 09:06:30.108206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2719274 ] 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:07.618 { 00:28:07.618 "params": { 00:28:07.618 "name": "Nvme$subsystem", 00:28:07.618 "trtype": "$TEST_TRANSPORT", 00:28:07.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:07.618 "adrfam": "ipv4", 00:28:07.618 "trsvcid": "$NVMF_PORT", 00:28:07.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:07.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:07.618 "hdgst": ${hdgst:-false}, 00:28:07.618 "ddgst": ${ddgst:-false} 00:28:07.618 }, 00:28:07.618 "method": "bdev_nvme_attach_controller" 00:28:07.618 } 00:28:07.618 EOF 00:28:07.618 )") 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:07.618 { 00:28:07.618 "params": { 00:28:07.618 "name": "Nvme$subsystem", 00:28:07.618 "trtype": 
"$TEST_TRANSPORT", 00:28:07.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:07.618 "adrfam": "ipv4", 00:28:07.618 "trsvcid": "$NVMF_PORT", 00:28:07.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:07.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:07.618 "hdgst": ${hdgst:-false}, 00:28:07.618 "ddgst": ${ddgst:-false} 00:28:07.618 }, 00:28:07.618 "method": "bdev_nvme_attach_controller" 00:28:07.618 } 00:28:07.618 EOF 00:28:07.618 )") 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:07.618 { 00:28:07.618 "params": { 00:28:07.618 "name": "Nvme$subsystem", 00:28:07.618 "trtype": "$TEST_TRANSPORT", 00:28:07.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:07.618 "adrfam": "ipv4", 00:28:07.618 "trsvcid": "$NVMF_PORT", 00:28:07.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:07.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:07.618 "hdgst": ${hdgst:-false}, 00:28:07.618 "ddgst": ${ddgst:-false} 00:28:07.618 }, 00:28:07.618 "method": "bdev_nvme_attach_controller" 00:28:07.618 } 00:28:07.618 EOF 00:28:07.618 )") 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:07.618 { 00:28:07.618 "params": { 00:28:07.618 "name": "Nvme$subsystem", 00:28:07.618 "trtype": "$TEST_TRANSPORT", 00:28:07.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:07.618 "adrfam": "ipv4", 00:28:07.618 "trsvcid": "$NVMF_PORT", 00:28:07.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:07.618 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:28:07.618 "hdgst": ${hdgst:-false}, 00:28:07.618 "ddgst": ${ddgst:-false} 00:28:07.618 }, 00:28:07.618 "method": "bdev_nvme_attach_controller" 00:28:07.618 } 00:28:07.618 EOF 00:28:07.618 )") 00:28:07.618 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:07.618 { 00:28:07.618 "params": { 00:28:07.618 "name": "Nvme$subsystem", 00:28:07.618 "trtype": "$TEST_TRANSPORT", 00:28:07.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:07.618 "adrfam": "ipv4", 00:28:07.618 "trsvcid": "$NVMF_PORT", 00:28:07.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:07.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:07.618 "hdgst": ${hdgst:-false}, 00:28:07.618 "ddgst": ${ddgst:-false} 00:28:07.618 }, 00:28:07.618 "method": "bdev_nvme_attach_controller" 00:28:07.618 } 00:28:07.618 EOF 00:28:07.618 )") 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:07.618 { 00:28:07.618 "params": { 00:28:07.618 "name": "Nvme$subsystem", 00:28:07.618 "trtype": "$TEST_TRANSPORT", 00:28:07.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:07.618 "adrfam": "ipv4", 00:28:07.618 "trsvcid": "$NVMF_PORT", 00:28:07.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:07.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:07.618 "hdgst": ${hdgst:-false}, 00:28:07.618 "ddgst": ${ddgst:-false} 00:28:07.618 }, 00:28:07.618 "method": 
"bdev_nvme_attach_controller" 00:28:07.618 } 00:28:07.618 EOF 00:28:07.618 )") 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:28:07.618 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:07.619 09:06:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:07.619 "params": { 00:28:07.619 "name": "Nvme1", 00:28:07.619 "trtype": "tcp", 00:28:07.619 "traddr": "10.0.0.2", 00:28:07.619 "adrfam": "ipv4", 00:28:07.619 "trsvcid": "4420", 00:28:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:07.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:07.619 "hdgst": false, 00:28:07.619 "ddgst": false 00:28:07.619 }, 00:28:07.619 "method": "bdev_nvme_attach_controller" 00:28:07.619 },{ 00:28:07.619 "params": { 00:28:07.619 "name": "Nvme2", 00:28:07.619 "trtype": "tcp", 00:28:07.619 "traddr": "10.0.0.2", 00:28:07.619 "adrfam": "ipv4", 00:28:07.619 "trsvcid": "4420", 00:28:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:07.619 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:07.619 "hdgst": false, 00:28:07.619 "ddgst": false 00:28:07.619 }, 00:28:07.619 "method": "bdev_nvme_attach_controller" 00:28:07.619 },{ 00:28:07.619 "params": { 00:28:07.619 "name": "Nvme3", 00:28:07.619 "trtype": "tcp", 00:28:07.619 "traddr": "10.0.0.2", 00:28:07.619 "adrfam": "ipv4", 00:28:07.619 "trsvcid": "4420", 00:28:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:07.619 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:07.619 "hdgst": false, 00:28:07.619 "ddgst": false 00:28:07.619 }, 00:28:07.619 "method": "bdev_nvme_attach_controller" 00:28:07.619 },{ 00:28:07.619 "params": { 00:28:07.619 "name": "Nvme4", 00:28:07.619 "trtype": "tcp", 00:28:07.619 "traddr": "10.0.0.2", 00:28:07.619 "adrfam": "ipv4", 00:28:07.619 "trsvcid": "4420", 00:28:07.619 "subnqn": 
"nqn.2016-06.io.spdk:cnode4",
00:28:07.619 "hostnqn": "nqn.2016-06.io.spdk:host4",
00:28:07.619 "hdgst": false,
00:28:07.619 "ddgst": false
00:28:07.619 },
00:28:07.619 "method": "bdev_nvme_attach_controller"
00:28:07.619 },{
00:28:07.619 "params": {
00:28:07.619 "name": "Nvme5",
00:28:07.619 "trtype": "tcp",
00:28:07.619 "traddr": "10.0.0.2",
00:28:07.619 "adrfam": "ipv4",
00:28:07.619 "trsvcid": "4420",
00:28:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:28:07.619 "hostnqn": "nqn.2016-06.io.spdk:host5",
00:28:07.619 "hdgst": false,
00:28:07.619 "ddgst": false
00:28:07.619 },
00:28:07.619 "method": "bdev_nvme_attach_controller"
00:28:07.619 },{
00:28:07.619 "params": {
00:28:07.619 "name": "Nvme6",
00:28:07.619 "trtype": "tcp",
00:28:07.619 "traddr": "10.0.0.2",
00:28:07.619 "adrfam": "ipv4",
00:28:07.619 "trsvcid": "4420",
00:28:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:28:07.619 "hostnqn": "nqn.2016-06.io.spdk:host6",
00:28:07.619 "hdgst": false,
00:28:07.619 "ddgst": false
00:28:07.619 },
00:28:07.619 "method": "bdev_nvme_attach_controller"
00:28:07.619 },{
00:28:07.619 "params": {
00:28:07.619 "name": "Nvme7",
00:28:07.619 "trtype": "tcp",
00:28:07.619 "traddr": "10.0.0.2",
00:28:07.619 "adrfam": "ipv4",
00:28:07.619 "trsvcid": "4420",
00:28:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:28:07.619 "hostnqn": "nqn.2016-06.io.spdk:host7",
00:28:07.619 "hdgst": false,
00:28:07.619 "ddgst": false
00:28:07.619 },
00:28:07.619 "method": "bdev_nvme_attach_controller"
00:28:07.619 },{
00:28:07.619 "params": {
00:28:07.619 "name": "Nvme8",
00:28:07.619 "trtype": "tcp",
00:28:07.619 "traddr": "10.0.0.2",
00:28:07.619 "adrfam": "ipv4",
00:28:07.619 "trsvcid": "4420",
00:28:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:28:07.619 "hostnqn": "nqn.2016-06.io.spdk:host8",
00:28:07.619 "hdgst": false,
00:28:07.619 "ddgst": false
00:28:07.619 },
00:28:07.619 "method": "bdev_nvme_attach_controller"
00:28:07.619 },{
00:28:07.619 "params": {
00:28:07.619 "name": "Nvme9",
00:28:07.619 "trtype": "tcp",
00:28:07.619 "traddr": "10.0.0.2",
00:28:07.619 "adrfam": "ipv4",
00:28:07.619 "trsvcid": "4420",
00:28:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:28:07.619 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:28:07.619 "hdgst": false,
00:28:07.619 "ddgst": false
00:28:07.619 },
00:28:07.619 "method": "bdev_nvme_attach_controller"
00:28:07.619 },{
00:28:07.619 "params": {
00:28:07.619 "name": "Nvme10",
00:28:07.619 "trtype": "tcp",
00:28:07.619 "traddr": "10.0.0.2",
00:28:07.619 "adrfam": "ipv4",
00:28:07.619 "trsvcid": "4420",
00:28:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:28:07.619 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:28:07.619 "hdgst": false,
00:28:07.619 "ddgst": false
00:28:07.619 },
00:28:07.619 "method": "bdev_nvme_attach_controller"
00:28:07.619 }'
00:28:07.619 [2024-06-09 09:06:30.166687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:07.880 [2024-06-09 09:06:30.231427] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:28:09.301 Running I/O for 10 seconds...
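The config above is the JSON that bdevperf was started with: one `bdev_nvme_attach_controller` stanza per target subsystem (Nvme1 through Nvme10, `cnode1`..`cnode10` on 10.0.0.2:4420, digests disabled). A minimal bash sketch of how such a config can be generated; the stanza fields mirror the log, while the `gen_attach_config` name and the outer `"subsystems"`/`"config"` wrapper are assumptions of this sketch, not taken from the log.

```shell
#!/usr/bin/env bash
# Sketch: emit one bdev_nvme_attach_controller stanza per subsystem, then
# wrap them in an (assumed) SPDK-style "subsystems" JSON document.
gen_attach_config() {
    local n stanzas=()
    for ((n = 1; n <= 10; n++)); do
        stanzas+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme${n}",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${n}",
    "hostnqn": "nqn.2016-06.io.spdk:host${n}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Join the stanzas with commas inside the bdev subsystem's config array.
    local IFS=,
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${stanzas[*]}"
}

# Sanity-check that the generated document is well-formed JSON.
gen_attach_config | python3 -m json.tool > /dev/null && echo "config parses as JSON"
```

The heredoc-per-stanza approach keeps the JSON readable; for anything less regular, emitting it via `jq -n` or a small Python helper would be less fragile than string assembly.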
00:28:09.301 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:28:09.301 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0
00:28:09.301 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:28:09.301 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:09.301 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:09.301 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:09.301 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:09.301 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:28:09.301 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:28:09.301 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:28:09.301 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1
00:28:09.302 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i
00:28:09.302 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:28:09.302 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:28:09.302 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:28:09.302 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:28:09.302 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:09.302 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:09.302 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:09.302 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3
00:28:09.302 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']'
00:28:09.302 09:06:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25
00:28:09.563 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:28:09.563 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:28:09.563 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:28:09.563 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:28:09.563 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:09.563 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:09.563 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:09.824 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67
00:28:09.824 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']'
00:28:09.824 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25
00:28:09.824 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:28:09.824 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2718888
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 2718888 ']'
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 2718888
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2718888
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:28:10.101
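In the trace above, `waitforio` (target/shutdown.sh) polls `bdev_get_iostat` over the bdevperf RPC socket until Nvme1n1 reports at least 100 completed reads (`read_io_count` goes 3, then 67, then 131 in this run), sleeping 0.25 s between polls and giving up after 10 attempts. A self-contained sketch of that loop follows; `mock_iostat` is a hypothetical stand-in for the real `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'` pipeline so the sketch runs without a live bdevperf.

```shell
#!/usr/bin/env bash
# Polling loop modeled on waitforio in target/shutdown.sh.
# mock_iostat (an assumption of this sketch) pretends 64 reads complete
# per poll; the real script queries bdevperf over its RPC socket instead.
mock_iostat() { echo $(( $1 * 64 )); }   # poll 1 -> 64, poll 2 -> 128, ...

ret=1
polls=0
for ((i = 10; i != 0; i--)); do
    polls=$((polls + 1))
    read_io_count=$(mock_iostat "$polls")   # real: rpc_cmd ... | jq -r '.bdevs[0].num_read_ops'
    if [ "$read_io_count" -ge 100 ]; then   # stop once >=100 reads have completed
        ret=0
        break
    fi
    sleep 0.25                              # same back-off as the trace
done
echo "ret=$ret read_io_count=$read_io_count"   # -> ret=0 read_io_count=128
```

With the mock, the threshold is crossed on the second poll; against a real target the loop simply returns nonzero if I/O never reaches 100 reads within 10 polls, which is what lets the test fail fast when bdevperf stalls.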
09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2718888'
killing process with pid 2718888
09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 2718888
00:28:10.101 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 2718888
00:28:10.101 [2024-06-09 09:06:32.484785] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca2e0 is same with the state(5) to be set
00:28:10.102 [2024-06-09 09:06:32.486023] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecccc0 is same with the state(5) to be set
00:28:10.102 [2024-06-09 09:06:32.487927] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecac20 is same with the state(5) to be set
*ERROR*: The recv state of tqpair=0xecac20 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488204] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecac20 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488208] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecac20 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488213] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecac20 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488217] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecac20 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488221] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecac20 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488225] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecac20 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488230] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecac20 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488234] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecac20 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488917] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488939] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488944] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488949] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 
is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488954] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488958] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488963] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488968] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488972] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488976] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488981] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488985] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488990] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488995] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.488999] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.489004] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 
00:28:10.103 [2024-06-09 09:06:32.489008] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.489021] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.489025] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.489030] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.489035] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.489039] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.489044] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.489048] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.489053] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.489057] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.103 [2024-06-09 09:06:32.489061] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489065] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489070] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489074] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489079] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489083] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489088] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489093] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489097] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489101] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489106] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489110] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489114] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489119] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489123] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489128] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489132] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489137] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489142] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489147] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489151] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489155] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489160] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489164] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489168] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489174] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489178] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 
is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489183] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489187] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489191] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489196] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489200] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489205] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489209] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489213] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489217] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489222] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb0e0 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489672] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489687] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 
00:28:10.104 [2024-06-09 09:06:32.489691] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489696] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489701] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489705] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489710] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489714] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489722] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489726] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489731] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489735] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489740] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489744] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489748] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489753] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489757] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489762] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489766] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489771] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489776] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489780] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489784] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489789] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489793] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489798] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489802] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489807] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489811] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489816] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489820] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489825] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489829] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489833] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489838] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489843] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489849] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489853] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489858] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 
is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489862] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489866] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489870] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489875] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489879] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489884] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489888] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489892] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489897] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.104 [2024-06-09 09:06:32.489902] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.489906] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.489910] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 
00:28:10.105 [2024-06-09 09:06:32.489915] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.489919] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.489923] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.489928] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.489932] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.489936] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.489941] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.489945] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.489950] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.489954] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.489959] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.489963] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb580 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.490955] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecbee0 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491462] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491475] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491480] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491485] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491490] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491494] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491499] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491504] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491508] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491513] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491517] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491522] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491527] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491531] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491536] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491540] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491544] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491549] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491553] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491558] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491563] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491567] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491572] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491576] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 
is same with the state(5) to be set 00:28:10.105 [2024-06-09 09:06:32.491581] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc820 is same with the state(5) to be set 00:28:10.106 [2024-06-09 09:06:32.492201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:28:10.106 [2024-06-09 09:06:32.492310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492415] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492503] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 
[2024-06-09 09:06:32.492695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.106 [2024-06-09 09:06:32.492812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.106 [2024-06-09 09:06:32.492819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.492828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.492835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.492844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.492851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.492860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.492867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.492876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.492883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.492892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.492899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.492909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.492917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.492926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.492933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.492942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.492949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.492958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.492965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:10.107 [2024-06-09 09:06:32.492974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.492982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.492991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.492998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.493015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.493032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.493047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.493063] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.493080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.493096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.493112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.493128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.493144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.493159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.493176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.493192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.493209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.493225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.493242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.493259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.493275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.107 [2024-06-09 09:06:32.493291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:10.107 [2024-06-09 09:06:32.493360] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ab3940 was disconnected and freed. reset controller. 
00:28:10.107 [2024-06-09 09:06:32.493492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.107 [2024-06-09 09:06:32.493507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.107 [2024-06-09 09:06:32.493524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.107 [2024-06-09 09:06:32.493539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.107 [2024-06-09 09:06:32.493554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5deb0 is same with the state(5) to be set 00:28:10.107 [2024-06-09 09:06:32.493583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.107 [2024-06-09 09:06:32.493591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.107 [2024-06-09 09:06:32.493609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.107 [2024-06-09 09:06:32.493625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.107 [2024-06-09 09:06:32.493639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.107 [2024-06-09 09:06:32.493646] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6e8d0 is same with the state(5) to be set 00:28:10.107 [2024-06-09 09:06:32.493671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.107 [2024-06-09 09:06:32.493683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.108 [2024-06-09 09:06:32.493695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.108 [2024-06-09 09:06:32.493708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.108 [2024-06-09 09:06:32.493717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:10.108 [2024-06-09 09:06:32.493723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.493731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.493738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.493745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7f40 is same with the state(5) to be set
00:28:10.108 [2024-06-09 09:06:32.493769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.493777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.493785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.493792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.493800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.493807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.493815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.493821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.493828] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be610 is same with the state(5) to be set
00:28:10.108 [2024-06-09 09:06:32.493851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.493859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.493869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.493876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.493884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.493890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.493898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.493906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.493912] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac7790 is same with the state(5) to be set
00:28:10.108 [2024-06-09 09:06:32.493934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.493942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.493949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.493957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.493964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.493971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.493978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.493985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.493992] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c83930 is same with the state(5) to be set
00:28:10.108 [2024-06-09 09:06:32.494013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.494021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.494029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.494036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.494043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.494050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.494058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.494065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.494071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c835a0 is same with the state(5) to be set
00:28:10.108 [2024-06-09 09:06:32.494095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.494103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.494111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.494118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.494126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.494133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.494140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.494147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.494154] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab9140 is same with the state(5) to be set
00:28:10.108 [2024-06-09 09:06:32.494174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.494182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.494190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.494197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.494204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.494211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.494219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.494225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.494232] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c82a00 is same with the state(5) to be set
00:28:10.108 [2024-06-09 09:06:32.494254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.494262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.494269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.494276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.494284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.494291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.494299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:10.108 [2024-06-09 09:06:32.494306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.494314] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5dcd0 is same with the state(5) to be set
00:28:10.108 [2024-06-09 09:06:32.495274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.108 [2024-06-09 09:06:32.495293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.495305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.108 [2024-06-09 09:06:32.495313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.495322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.108 [2024-06-09 09:06:32.495329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.495338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.108 [2024-06-09 09:06:32.495345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.495355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.108 [2024-06-09 09:06:32.495362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.108 [2024-06-09 09:06:32.495371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.108 [2024-06-09 09:06:32.495378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.495759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.495769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.511341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.511394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.511411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.511422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.511430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.511440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.511447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.511456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.511464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.511473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.511480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.511489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.511496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.511506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.511513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.511528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.511535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.511544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.511551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.511561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.511568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.511578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.511584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.511594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.511601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.511610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.511618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.511627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.511635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.511644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.511651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.109 [2024-06-09 09:06:32.511660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.109 [2024-06-09 09:06:32.511668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.511677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.511684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.511693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.511700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.511709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.511716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.511725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.511734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.511743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.511751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.511760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.511767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.511776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.511783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.511792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.511800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.511809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.511816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.511826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.511833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.511841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.511848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.511858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.511865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.511874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.511881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.511891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.511898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.511908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.511915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.511923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.511930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.511941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.511948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.511958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.511965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.512018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:10.110 [2024-06-09 09:06:32.512066] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ab0f00 was disconnected and freed. reset controller.
00:28:10.110 [2024-06-09 09:06:32.514266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5deb0 (9): Bad file descriptor
00:28:10.110 [2024-06-09 09:06:32.514298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6e8d0 (9): Bad file descriptor
00:28:10.110 [2024-06-09 09:06:32.514313] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae7f40 (9): Bad file descriptor
00:28:10.110 [2024-06-09 09:06:32.514331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15be610 (9): Bad file descriptor
00:28:10.110 [2024-06-09 09:06:32.514346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac7790 (9): Bad file descriptor
00:28:10.110 [2024-06-09 09:06:32.514364] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c83930 (9): Bad file descriptor
00:28:10.110 [2024-06-09 09:06:32.514380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c835a0 (9): Bad file descriptor
00:28:10.110 [2024-06-09 09:06:32.514397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab9140 (9): Bad file descriptor
00:28:10.110 [2024-06-09 09:06:32.514419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c82a00 (9): Bad file descriptor
00:28:10.110 [2024-06-09 09:06:32.514436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5dcd0 (9): Bad file descriptor
00:28:10.110 [2024-06-09 09:06:32.514567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.514579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.514594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.514602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.514611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.514619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.514629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.514636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.514646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.514653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.514670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.514678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.514687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.514694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.514703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.514710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.514719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.514727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.514736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.110 [2024-06-09 09:06:32.514743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.110 [2024-06-09 09:06:32.514753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:28:10.111 [2024-06-09 09:06:32.514760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.514769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.514776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.514786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.514793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.514803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.514809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.514819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.514826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.514835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.514842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.514851] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.514858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.514867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.514876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.514885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.514892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.514902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.514909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.514918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.514925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.514934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.514942] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.514951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.514958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.514968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.514975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.514984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.514991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 
[2024-06-09 09:06:32.515133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.111 [2024-06-09 09:06:32.515392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.111 [2024-06-09 09:06:32.515399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.515413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.515420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.515429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.515437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.515446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.515453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.515463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.515470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.515480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.515487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.515498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.515505] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.515514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.515521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.515530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.515537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.515546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.515553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.515562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.515569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.515579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.515586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.515595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.515602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.515611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.515619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.515628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.515635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.515688] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b80d80 was disconnected and freed. reset controller. 
00:28:10.112 [2024-06-09 09:06:32.517048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517173] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 
[2024-06-09 09:06:32.517465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.112 [2024-06-09 09:06:32.517496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.112 [2024-06-09 09:06:32.517505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 
[2024-06-09 09:06:32.517835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.517989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.517998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.518005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.518014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.518022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.518030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.518038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.518047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.518054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.518063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.518070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.518080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.518087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.518096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.518103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.518112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.518119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.518128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.113 [2024-06-09 09:06:32.518136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.113 [2024-06-09 09:06:32.518188] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ab2420 was disconnected and freed. reset controller. 00:28:10.113 [2024-06-09 09:06:32.518302] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:10.113 [2024-06-09 09:06:32.521166] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:10.113 [2024-06-09 09:06:32.521194] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:10.114 [2024-06-09 09:06:32.521214] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:10.114 [2024-06-09 09:06:32.521955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.114 [2024-06-09 09:06:32.521997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5deb0 with addr=10.0.0.2, port=4420 00:28:10.114 [2024-06-09 09:06:32.522009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5deb0 is same with the state(5) to be set 00:28:10.114 [2024-06-09 09:06:32.522533] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:28:10.114 [2024-06-09 09:06:32.522573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:10.114 [2024-06-09 09:06:32.523085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.114 [2024-06-09 09:06:32.523100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15be610 with addr=10.0.0.2, port=4420 00:28:10.114 [2024-06-09 09:06:32.523108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be610 is same with the state(5) to be set 00:28:10.114 [2024-06-09 09:06:32.523651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.114 [2024-06-09 09:06:32.523688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c835a0 with addr=10.0.0.2, port=4420 00:28:10.114 [2024-06-09 09:06:32.523699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c835a0 is same with the state(5) to be set 00:28:10.114 [2024-06-09 09:06:32.523714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5deb0 (9): Bad file descriptor 00:28:10.114 [2024-06-09 09:06:32.524088] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:10.114 [2024-06-09 09:06:32.524133] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:10.114 [2024-06-09 09:06:32.524172] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:10.114 [2024-06-09 09:06:32.524977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.114 [2024-06-09 09:06:32.524994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c82a00 with addr=10.0.0.2, port=4420 00:28:10.114 [2024-06-09 09:06:32.525001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c82a00 is same with the state(5) to be 
set 00:28:10.114 [2024-06-09 09:06:32.525011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15be610 (9): Bad file descriptor 00:28:10.114 [2024-06-09 09:06:32.525021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c835a0 (9): Bad file descriptor 00:28:10.114 [2024-06-09 09:06:32.525029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:10.114 [2024-06-09 09:06:32.525036] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:10.114 [2024-06-09 09:06:32.525044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:10.114 [2024-06-09 09:06:32.525131] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:10.114 [2024-06-09 09:06:32.525200] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.114 [2024-06-09 09:06:32.525234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c82a00 (9): Bad file descriptor 00:28:10.114 [2024-06-09 09:06:32.525243] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:10.114 [2024-06-09 09:06:32.525249] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:10.114 [2024-06-09 09:06:32.525256] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:28:10.114 [2024-06-09 09:06:32.525267] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:10.114 [2024-06-09 09:06:32.525279] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:10.114 [2024-06-09 09:06:32.525286] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:10.114 [2024-06-09 09:06:32.525324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 
09:06:32.525507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525598] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.114 [2024-06-09 09:06:32.525663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.114 [2024-06-09 09:06:32.525672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 
nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:10.115 [2024-06-09 09:06:32.525786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525873] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.525987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.525996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 
09:06:32.526153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526243] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.115 [2024-06-09 09:06:32.526313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.115 [2024-06-09 09:06:32.526322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.526329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.526338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.526346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.526355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.526362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.526371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.526378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.526386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7fb40 is same with the state(5) to be set 00:28:10.116 [2024-06-09 09:06:32.527675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.527690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.527703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.527712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 
09:06:32.527724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.527732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.527744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.527753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.527763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.527770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.527780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.527786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.527796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.527803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.527812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.527819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.527828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.527836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.527845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.527852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.527864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.527871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.527880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.527887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.527897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.527904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.527913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.527920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.527929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.527937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.527946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.527953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.527962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.527969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.527978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.527985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.527994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.528001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:10.116 [2024-06-09 09:06:32.528010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.528017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.528026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.528033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.528042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.528049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.528058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.528067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.528076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.528083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.528092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.528099] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.528108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.528115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.528124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.528131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.528140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.528147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.528156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.528163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.528172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.528179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.528188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.528195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.528205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.528212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.528221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.528228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.528237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.528244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.528253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.528260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.116 [2024-06-09 09:06:32.528271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.116 [2024-06-09 09:06:32.528278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 
09:06:32.528375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528470] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 
[2024-06-09 09:06:32.528655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.528738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.528746] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff2f0 is same with the state(5) to be set 00:28:10.117 [2024-06-09 09:06:32.530023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.530035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.530047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.530056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.530067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.530076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.530087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.530095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.530106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.530113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.530122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.530129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.530138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.530145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.530154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.530162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.530173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.530180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.530190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.530196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.117 [2024-06-09 09:06:32.530206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.117 [2024-06-09 09:06:32.530212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:10.117 [2024-06-09 09:06:32.530221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530309] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 
09:06:32.530589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530679] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 [2024-06-09 09:06:32.530847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.118 [2024-06-09 09:06:32.530857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.118 
[2024-06-09 09:06:32.530863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.530872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.530879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.530888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.530895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.530904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.530911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.530920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.530927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.530936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.530943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.530952] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.530959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.530968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.530975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.530986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.530993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.531002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.531009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.531018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.531025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.531034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.531041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.531050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.531057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.531066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.531073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.531081] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c00780 is same with the state(5) to be set 00:28:10.119 [2024-06-09 09:06:32.532344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.532369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.532389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:10.119 [2024-06-09 09:06:32.532412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.532432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.532448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.532467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.532484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.532500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.532516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.532533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.532549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.532565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.532582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.532598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.532613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.532630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.532646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.532662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.119 [2024-06-09 09:06:32.532671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.119 [2024-06-09 09:06:32.532680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:10.120 [2024-06-09 09:06:32.532696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.532712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.532728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.532744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.532760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.532776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532783] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.532792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.532807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.532823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.532839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.532856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.532873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.532890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.532906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.532923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.532940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.532956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.532973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.532989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.532996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 
09:06:32.533061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533154] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.120 [2024-06-09 09:06:32.533324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.120 [2024-06-09 09:06:32.533333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 
[2024-06-09 09:06:32.533340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.533349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.533355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.533364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.533372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.533381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.533388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.533397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.533408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.533416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c01c50 is same with the state(5) to be set 00:28:10.121 [2024-06-09 09:06:32.534683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.534696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.534709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.534718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.534729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.534738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.534750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.534758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.534769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.534779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.534789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.534795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.534805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.534812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.534821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.534828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.534837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.534844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.534853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.534860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.534869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.534876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.534885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.534892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:10.121 [2024-06-09 09:06:32.534902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.534909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.534918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.534925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.534934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.534941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.534950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.534957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.534966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.534973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.534983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.534991] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.535000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.535006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.535016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.535023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.535032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.535039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.535047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.535055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.535063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.535070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.535079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.535086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.535096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.535103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.535112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.535120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.535128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.535136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.535144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.535151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.535160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.535167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.535177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.535186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.535195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.535202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.535212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.535219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.535228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.535236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.535245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 09:06:32.535252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.121 [2024-06-09 09:06:32.535261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.121 [2024-06-09 
09:06:32.535268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535357] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 
[2024-06-09 09:06:32.535550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.535745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.535753] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab4d90 is same with the state(5) to be set 00:28:10.122 [2024-06-09 09:06:32.539182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.539215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.539232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.539240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.539250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.539262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.539272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.539279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:10.122 [2024-06-09 09:06:32.539288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.539295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.539304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.539311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.539321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.539328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.539337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.539343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.539353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.539360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.539368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.122 [2024-06-09 09:06:32.539375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.122 [2024-06-09 09:06:32.539384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:10.123 [2024-06-09 09:06:32.539572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539659] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 
09:06:32.539935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.123 [2024-06-09 09:06:32.539944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.123 [2024-06-09 09:06:32.539951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.539959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 [2024-06-09 09:06:32.539966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.539975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 [2024-06-09 09:06:32.539983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.539992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 [2024-06-09 09:06:32.539999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.540008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 [2024-06-09 09:06:32.540014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.540024] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 [2024-06-09 09:06:32.540030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.540040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 [2024-06-09 09:06:32.540046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.540055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 [2024-06-09 09:06:32.540062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.540071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 [2024-06-09 09:06:32.540080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.540089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 [2024-06-09 09:06:32.540096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.540105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 [2024-06-09 09:06:32.540112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.540121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 [2024-06-09 09:06:32.540128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.540137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 [2024-06-09 09:06:32.540144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.540153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 [2024-06-09 09:06:32.540160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.540169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 [2024-06-09 09:06:32.540176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.540185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 [2024-06-09 09:06:32.540192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.540201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 
[2024-06-09 09:06:32.540207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.540216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 [2024-06-09 09:06:32.540223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.540232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 [2024-06-09 09:06:32.540239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.540249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.124 [2024-06-09 09:06:32.540255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.124 [2024-06-09 09:06:32.540264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b781e0 is same with the state(5) to be set 00:28:10.124 [2024-06-09 09:06:32.541784] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.124 [2024-06-09 09:06:32.541803] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.124 [2024-06-09 09:06:32.541815] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:10.124 [2024-06-09 09:06:32.541826] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:28:10.124 [2024-06-09 09:06:32.541836] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:28:10.124 [2024-06-09 09:06:32.541870] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:28:10.124 [2024-06-09 09:06:32.541877] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:28:10.124 [2024-06-09 09:06:32.541885] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:28:10.124 [2024-06-09 09:06:32.541935] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:10.124 [2024-06-09 09:06:32.541948] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:10.124 [2024-06-09 09:06:32.541958] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:10.124 [2024-06-09 09:06:32.541973] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:10.124 [2024-06-09 09:06:32.542041] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:28:10.124 [2024-06-09 09:06:32.542051] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:28:10.124 task offset: 29056 on job bdev=Nvme8n1 fails
00:28:10.124
00:28:10.124 Latency(us)
00:28:10.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:10.124 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.124 Job: Nvme1n1 ended in about 0.95 seconds with error
00:28:10.124 Verification LBA range: start 0x0 length 0x400
00:28:10.124 Nvme1n1 : 0.95 134.24 8.39 67.12 0.00 314382.22 23811.41 279620.27
00:28:10.124 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.124 Job: Nvme2n1 ended in about 0.95 seconds with error
00:28:10.124 Verification LBA range: start 0x0 length 0x400
00:28:10.124 Nvme2n1 : 0.95 135.38 8.46 67.69 0.00 305208.04 23702.19 346030.08
00:28:10.124 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.124 Job: Nvme3n1 ended in about 0.96 seconds with error
00:28:10.124 Verification LBA range: start 0x0 length 0x400
00:28:10.124 Nvme3n1 : 0.96 133.91 8.37 66.96 0.00 302149.12 23156.05 298844.16
00:28:10.124 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.124 Job: Nvme4n1 ended in about 0.96 seconds with error
00:28:10.124 Verification LBA range: start 0x0 length 0x400
00:28:10.124 Nvme4n1 : 0.96 133.59 8.35 66.79 0.00 296506.31 24466.77 277872.64
00:28:10.124 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.124 Job: Nvme5n1 ended in about 0.96 seconds with error
00:28:10.124 Verification LBA range: start 0x0 length 0x400
00:28:10.124 Nvme5n1 : 0.96 199.90 12.49 66.63 0.00 218030.72 21845.33 221948.59
00:28:10.124 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.124 Job: Nvme6n1 ended in about 0.94 seconds with error
00:28:10.124 Verification LBA range: start 0x0 length 0x400
00:28:10.124 Nvme6n1 : 0.94 135.74 8.48 67.87 0.00 278498.13 20862.29 300591.79
00:28:10.124 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.124 Job: Nvme7n1 ended in about 0.95 seconds with error
00:28:10.124 Verification LBA range: start 0x0 length 0x400
00:28:10.124 Nvme7n1 : 0.95 270.40 16.90 67.60 0.00 163884.37 6990.51 203598.51
00:28:10.124 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.124 Job: Nvme8n1 ended in about 0.94 seconds with error
00:28:10.124 Verification LBA range: start 0x0 length 0x400
00:28:10.124 Nvme8n1 : 0.94 204.24 12.77 68.08 0.00 198430.93 20971.52 279620.27
00:28:10.124 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.124 Job: Nvme9n1 ended in about 0.96 seconds with error
00:28:10.124 Verification LBA range: start 0x0 length 0x400
00:28:10.124 Nvme9n1 : 0.96 265.88 16.62 66.47 0.00 159317.42 10704.21 177384.11
00:28:10.124 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.124 Job: Nvme10n1 ended in about 0.97 seconds with error
00:28:10.124 Verification LBA range: start 0x0 length 0x400
00:28:10.124 Nvme10n1 : 0.97 66.16 4.14 66.16 0.00 391653.55 37792.43 389720.75
00:28:10.124 ===================================================================================================================
00:28:10.124 Total : 1679.45 104.97 671.38 0.00 244439.83 6990.51 389720.75
00:28:10.124 [2024-06-09 09:06:32.567235] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:10.124 [2024-06-09 09:06:32.567278] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:28:10.124 [2024-06-09 09:06:32.567295] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:10.124 [2024-06-09 09:06:32.567792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:10.124 [2024-06-09 09:06:32.567810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab9140 with addr=10.0.0.2, port=4420
00:28:10.125 [2024-06-09 09:06:32.567821] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab9140 is same with the state(5) to be set
00:28:10.125 [2024-06-09 09:06:32.568300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:10.125 [2024-06-09 09:06:32.568310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac7790 with addr=10.0.0.2, port=4420
00:28:10.125 [2024-06-09 09:06:32.568317] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac7790 is same with the state(5) to be set
00:28:10.125 [2024-06-09 09:06:32.568680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:10.125 [2024-06-09 09:06:32.568690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c83930 with addr=10.0.0.2, port=4420
00:28:10.125 [2024-06-09 09:06:32.568697] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c83930 is same with the state(5) to be set
00:28:10.125 [2024-06-09 09:06:32.570290] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:10.125 [2024-06-09 09:06:32.570775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:10.125 [2024-06-09 09:06:32.570788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae7f40 with addr=10.0.0.2, port=4420
00:28:10.125 [2024-06-09 09:06:32.570795] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7f40 is same with the state(5) to be set
00:28:10.125 [2024-06-09 09:06:32.571278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:10.125 [2024-06-09 09:06:32.571288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5dcd0 with addr=10.0.0.2, port=4420
00:28:10.125 [2024-06-09 09:06:32.571294] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5dcd0 is same with the state(5) to be set
00:28:10.125 [2024-06-09 09:06:32.571748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:10.125 [2024-06-09 09:06:32.571758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6e8d0 with addr=10.0.0.2, port=4420
00:28:10.125 [2024-06-09 09:06:32.571765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6e8d0 is same with the state(5) to be set
00:28:10.125 [2024-06-09 09:06:32.571777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab9140 (9): Bad file descriptor
00:28:10.125 [2024-06-09 09:06:32.571788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac7790 (9): Bad file descriptor
00:28:10.125 [2024-06-09 09:06:32.571803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c83930 (9): Bad file descriptor
00:28:10.125 [2024-06-09 09:06:32.571831] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:10.125 [2024-06-09 09:06:32.571843] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:10.125 [2024-06-09 09:06:32.571863] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:10.125 [2024-06-09 09:06:32.571876] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:10.125 [2024-06-09 09:06:32.571886] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:10.125 [2024-06-09 09:06:32.571947] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:28:10.125 [2024-06-09 09:06:32.571957] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:28:10.125 [2024-06-09 09:06:32.572444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:10.125 [2024-06-09 09:06:32.572456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5deb0 with addr=10.0.0.2, port=4420
00:28:10.125 [2024-06-09 09:06:32.572463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5deb0 is same with the state(5) to be set
00:28:10.125 [2024-06-09 09:06:32.572472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae7f40 (9): Bad file descriptor
00:28:10.125 [2024-06-09 09:06:32.572481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5dcd0 (9): Bad file descriptor
00:28:10.125 [2024-06-09 09:06:32.572490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6e8d0 (9): Bad file descriptor
00:28:10.125 [2024-06-09 09:06:32.572498] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:10.125 [2024-06-09 09:06:32.572504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:10.125 [2024-06-09 09:06:32.572512] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:10.125 [2024-06-09 09:06:32.572523] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:28:10.125 [2024-06-09 09:06:32.572529] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:28:10.125 [2024-06-09 09:06:32.572536] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:28:10.125 [2024-06-09 09:06:32.572545] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:28:10.125 [2024-06-09 09:06:32.572552] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:28:10.125 [2024-06-09 09:06:32.572558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:28:10.125 [2024-06-09 09:06:32.572840] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:28:10.125 [2024-06-09 09:06:32.572852] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:10.125 [2024-06-09 09:06:32.572859] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:10.125 [2024-06-09 09:06:32.572865] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:10.125 [2024-06-09 09:06:32.573373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:10.125 [2024-06-09 09:06:32.573385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c835a0 with addr=10.0.0.2, port=4420
00:28:10.125 [2024-06-09 09:06:32.573393] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c835a0 is same with the state(5) to be set
00:28:10.125 [2024-06-09 09:06:32.573861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:10.125 [2024-06-09 09:06:32.573871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15be610 with addr=10.0.0.2, port=4420
00:28:10.125 [2024-06-09 09:06:32.573877] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be610 is same with the state(5) to be set
00:28:10.125 [2024-06-09 09:06:32.573887] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5deb0 (9): Bad file descriptor
00:28:10.125 [2024-06-09 09:06:32.573895] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:28:10.125 [2024-06-09 09:06:32.573901] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:28:10.125 [2024-06-09 09:06:32.573907] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:28:10.125 [2024-06-09 09:06:32.573917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:28:10.125 [2024-06-09 09:06:32.573923] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:28:10.125 [2024-06-09 09:06:32.573930] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:28:10.125 [2024-06-09 09:06:32.573939] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:28:10.125 [2024-06-09 09:06:32.573945] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:28:10.125 [2024-06-09 09:06:32.573952] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:28:10.125 [2024-06-09 09:06:32.573982] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:10.125 [2024-06-09 09:06:32.573988] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:10.125 [2024-06-09 09:06:32.573994] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:10.125 [2024-06-09 09:06:32.574333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:10.125 [2024-06-09 09:06:32.574342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c82a00 with addr=10.0.0.2, port=4420
00:28:10.125 [2024-06-09 09:06:32.574349] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c82a00 is same with the state(5) to be set
00:28:10.125 [2024-06-09 09:06:32.574358] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c835a0 (9): Bad file descriptor
00:28:10.125 [2024-06-09 09:06:32.574366] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15be610 (9): Bad file descriptor
00:28:10.125 [2024-06-09 09:06:32.574374] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:28:10.125 [2024-06-09 09:06:32.574380] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:28:10.125 [2024-06-09 09:06:32.574387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:28:10.125 [2024-06-09 09:06:32.574421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:10.125 [2024-06-09 09:06:32.574429] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c82a00 (9): Bad file descriptor
00:28:10.125 [2024-06-09 09:06:32.574438] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:28:10.125 [2024-06-09 09:06:32.574444] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:28:10.125 [2024-06-09 09:06:32.574450] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:28:10.125 [2024-06-09 09:06:32.574460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:28:10.125 [2024-06-09 09:06:32.574469] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:28:10.125 [2024-06-09 09:06:32.574476] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:28:10.125 [2024-06-09 09:06:32.574511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:10.125 [2024-06-09 09:06:32.574519] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:10.125 [2024-06-09 09:06:32.574525] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:28:10.125 [2024-06-09 09:06:32.574532] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:28:10.125 [2024-06-09 09:06:32.574538] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:28:10.125 [2024-06-09 09:06:32.574565] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.387 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:10.387 09:06:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2719274 00:28:11.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2719274) - No such process 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:28:11.331 rmmod nvme_tcp 00:28:11.331 rmmod nvme_fabrics 00:28:11.331 rmmod nvme_keyring 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:11.331 09:06:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.880 09:06:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:13.880 00:28:13.880 real 0m7.520s 00:28:13.880 user 0m17.661s 00:28:13.880 sys 0m1.238s 00:28:13.880 09:06:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:13.880 09:06:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:13.880 ************************************ 00:28:13.880 END TEST nvmf_shutdown_tc3 00:28:13.880 
************************************ 00:28:13.880 09:06:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:13.880 00:28:13.880 real 0m32.224s 00:28:13.880 user 1m14.283s 00:28:13.880 sys 0m9.577s 00:28:13.880 09:06:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:13.880 09:06:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:13.880 ************************************ 00:28:13.880 END TEST nvmf_shutdown 00:28:13.880 ************************************ 00:28:13.880 09:06:35 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:28:13.881 09:06:35 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:13.881 09:06:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:13.881 09:06:36 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:28:13.881 09:06:36 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:13.881 09:06:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:13.881 09:06:36 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:28:13.881 09:06:36 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:13.881 09:06:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:13.881 09:06:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:13.881 09:06:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:13.881 ************************************ 00:28:13.881 START TEST nvmf_multicontroller 00:28:13.881 ************************************ 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:13.881 * Looking for test storage... 
00:28:13.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.881 
09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:13.881 09:06:36 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:13.881 09:06:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.474 09:06:42 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:20.474 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:20.474 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:20.474 09:06:42 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:20.474 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:20.474 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:20.474 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:20.475 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:20.475 09:06:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:20.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:28:20.735 00:28:20.735 --- 10.0.0.2 ping statistics --- 00:28:20.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.735 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:20.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:20.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.439 ms 00:28:20.735 00:28:20.735 --- 10.0.0.1 ping statistics --- 00:28:20.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.735 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 
-- # modprobe nvme-tcp 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2724029 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2724029 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 2724029 ']' 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:20.735 09:06:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:20.995 [2024-06-09 09:06:43.326756] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:28:20.995 [2024-06-09 09:06:43.326807] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.995 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.995 [2024-06-09 09:06:43.406634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:20.995 [2024-06-09 09:06:43.470768] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.995 [2024-06-09 09:06:43.470805] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.995 [2024-06-09 09:06:43.470813] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.995 [2024-06-09 09:06:43.470819] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.995 [2024-06-09 09:06:43.470825] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:20.995 [2024-06-09 09:06:43.470930] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:20.995 [2024-06-09 09:06:43.471086] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.995 [2024-06-09 09:06:43.471087] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.938 [2024-06-09 09:06:44.182611] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.938 Malloc0 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.938 [2024-06-09 09:06:44.248808] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.938 [2024-06-09 09:06:44.260742] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.938 Malloc1 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.938 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.939 09:06:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:21.939 09:06:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2724358 00:28:21.939 09:06:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:21.939 09:06:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2724358 /var/tmp/bdevperf.sock 00:28:21.939 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 2724358 ']' 00:28:21.939 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:21.939 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:21.939 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:21.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:21.939 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:21.939 09:06:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.881 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:22.881 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:28:22.881 09:06:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:22.881 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:22.881 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.881 NVMe0n1 00:28:22.881 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:22.881 09:06:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:22.881 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:22.882 1 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd
00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd
00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:28:22.882 request:
00:28:22.882 {
00:28:22.882 "name": "NVMe0",
00:28:22.882 "trtype": "tcp",
00:28:22.882 "traddr": "10.0.0.2",
00:28:22.882 "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:28:22.882 "hostaddr": "10.0.0.2",
00:28:22.882 "hostsvcid": "60000",
00:28:22.882 "adrfam": "ipv4",
00:28:22.882 "trsvcid": "4420",
00:28:22.882 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:22.882 "method": "bdev_nvme_attach_controller",
00:28:22.882 "req_id": 1
00:28:22.882 }
00:28:22.882 Got JSON-RPC error response
00:28:22.882 response:
00:28:22.882 {
00:28:22.882 "code": -114,
00:28:22.882 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:28:22.882 }
00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]]
00:28:22.882 09:06:45
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:22.882 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:23.143 request: 00:28:23.143 { 00:28:23.143 "name": "NVMe0", 00:28:23.143 "trtype": "tcp", 
00:28:23.143 "traddr": "10.0.0.2", 00:28:23.143 "hostaddr": "10.0.0.2", 00:28:23.143 "hostsvcid": "60000", 00:28:23.143 "adrfam": "ipv4", 00:28:23.143 "trsvcid": "4420", 00:28:23.143 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:23.143 "method": "bdev_nvme_attach_controller", 00:28:23.143 "req_id": 1 00:28:23.143 } 00:28:23.143 Got JSON-RPC error response 00:28:23.143 response: 00:28:23.144 { 00:28:23.144 "code": -114, 00:28:23.144 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:23.144 } 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # 
type -t rpc_cmd
00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:28:23.144 request:
00:28:23.144 {
00:28:23.144 "name": "NVMe0",
00:28:23.144 "trtype": "tcp",
00:28:23.144 "traddr": "10.0.0.2",
00:28:23.144 "hostaddr": "10.0.0.2",
00:28:23.144 "hostsvcid": "60000",
00:28:23.144 "adrfam": "ipv4",
00:28:23.144 "trsvcid": "4420",
00:28:23.144 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:23.144 "multipath": "disable",
00:28:23.144 "method": "bdev_nvme_attach_controller",
00:28:23.144 "req_id": 1
00:28:23.144 }
00:28:23.144 Got JSON-RPC error response
00:28:23.144 response:
00:28:23.144 {
00:28:23.144 "code": -114,
00:28:23.144 "message": "A controller named NVMe0 already exists and multipath is disabled\n"
00:28:23.144 }
00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]]
00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1
00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i
10.0.0.2 -c 60000 -x failover 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:23.144 request: 00:28:23.144 { 00:28:23.144 "name": "NVMe0", 00:28:23.144 "trtype": "tcp", 00:28:23.144 "traddr": "10.0.0.2", 00:28:23.144 "hostaddr": "10.0.0.2", 00:28:23.144 "hostsvcid": "60000", 00:28:23.144 "adrfam": "ipv4", 00:28:23.144 "trsvcid": "4420", 00:28:23.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:23.144 "multipath": "failover", 00:28:23.144 "method": "bdev_nvme_attach_controller", 00:28:23.144 "req_id": 1 00:28:23.144 } 00:28:23.144 Got JSON-RPC error response 00:28:23.144 response: 00:28:23.144 { 00:28:23.144 "code": -114, 00:28:23.144 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:23.144 } 
00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:23.144 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:28:23.144 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:23.144 09:06:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:24.529 0 00:28:24.529 09:06:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:24.529 09:06:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:24.529 09:06:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:24.529 09:06:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.529 09:06:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2724358 00:28:24.529 09:06:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 2724358 ']' 00:28:24.529 09:06:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 2724358 00:28:24.529 09:06:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:28:24.529 09:06:46 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:24.529 09:06:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2724358 00:28:24.529 09:06:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:24.529 09:06:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:24.529 09:06:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2724358' 00:28:24.529 killing process with pid 2724358 00:28:24.529 09:06:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 2724358 00:28:24.529 09:06:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 2724358 00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:24.529 09:06:47 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # sort -u 00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # cat 00:28:24.529 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:24.529 [2024-06-09 09:06:44.366331] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:28:24.529 [2024-06-09 09:06:44.366385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2724358 ] 00:28:24.529 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.529 [2024-06-09 09:06:44.424968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.529 [2024-06-09 09:06:44.489245] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.529 [2024-06-09 09:06:45.679068] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 6650a0c5-93d2-45d7-abe1-f96ea8f14a99 already exists 00:28:24.529 [2024-06-09 09:06:45.679097] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:6650a0c5-93d2-45d7-abe1-f96ea8f14a99 alias for bdev NVMe1n1 00:28:24.529 [2024-06-09 09:06:45.679108] bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:24.529 Running I/O for 1 seconds... 
00:28:24.529
00:28:24.529 Latency(us)
00:28:24.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:24.529 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:28:24.529 NVMe0n1 : 1.00 29465.98 115.10 0.00 0.00 4335.03 2143.57 7482.03
00:28:24.529 ===================================================================================================================
00:28:24.529 Total : 29465.98 115.10 0.00 0.00 4335.03 2143.57 7482.03
00:28:24.529 Received shutdown signal, test time was about 1.000000 seconds
00:28:24.529
00:28:24.529 Latency(us)
00:28:24.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:24.529 ===================================================================================================================
00:28:24.529 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:24.529 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1617 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file
00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:24.529 09:06:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:24.529 rmmod nvme_tcp
00:28:24.529 rmmod nvme_fabrics
00:28:24.791 rmmod nvme_keyring
00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2724029 ']' 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2724029 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 2724029 ']' 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 2724029 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2724029 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2724029' 00:28:24.791 killing process with pid 2724029 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 2724029 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 2724029 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:24.791 09:06:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.343 09:06:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:27.343 00:28:27.343 real 0m13.325s 00:28:27.343 user 0m16.502s 00:28:27.343 sys 0m6.003s 00:28:27.343 09:06:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:27.343 09:06:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.343 ************************************ 00:28:27.343 END TEST nvmf_multicontroller 00:28:27.343 ************************************ 00:28:27.343 09:06:49 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:27.343 09:06:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:27.343 09:06:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:27.343 09:06:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:27.343 ************************************ 00:28:27.343 START TEST nvmf_aer 00:28:27.343 ************************************ 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:27.343 * Looking for test storage... 
00:28:27.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.343 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.344 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:27.344 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:27.344 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:28:27.344 09:06:49 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:27.344 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:27.344 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.344 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:27.344 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:27.344 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:27.344 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.344 09:06:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:27.344 09:06:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.344 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:27.344 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:27.344 09:06:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:27.344 09:06:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:33.937 09:06:56 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:33.937 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:33.937 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:33.937 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.937 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:33.938 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.938 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:34.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:34.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:28:34.199 00:28:34.199 --- 10.0.0.2 ping statistics --- 00:28:34.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.199 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:34.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.429 ms 00:28:34.199 00:28:34.199 --- 10.0.0.1 ping statistics --- 00:28:34.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.199 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2729030 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2729030 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 2729030 ']' 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:34.199 09:06:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:34.199 [2024-06-09 09:06:56.601861] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:28:34.199 [2024-06-09 09:06:56.601920] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.199 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.199 [2024-06-09 09:06:56.668686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:34.199 [2024-06-09 09:06:56.735289] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:34.199 [2024-06-09 09:06:56.735322] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.199 [2024-06-09 09:06:56.735329] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.199 [2024-06-09 09:06:56.735336] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.199 [2024-06-09 09:06:56.735341] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.199 [2024-06-09 09:06:56.735437] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.199 [2024-06-09 09:06:56.735658] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.199 [2024-06-09 09:06:56.735659] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:34.199 [2024-06-09 09:06:56.735511] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:35.175 [2024-06-09 09:06:57.435025] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.175 09:06:57 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:35.175 Malloc0 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:35.175 [2024-06-09 09:06:57.494396] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:35.175 [ 00:28:35.175 { 00:28:35.175 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:35.175 "subtype": "Discovery", 00:28:35.175 "listen_addresses": [], 00:28:35.175 "allow_any_host": true, 00:28:35.175 "hosts": [] 00:28:35.175 }, 00:28:35.175 { 00:28:35.175 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:35.175 "subtype": "NVMe", 00:28:35.175 "listen_addresses": [ 00:28:35.175 { 00:28:35.175 "trtype": "TCP", 00:28:35.175 "adrfam": "IPv4", 00:28:35.175 "traddr": "10.0.0.2", 00:28:35.175 "trsvcid": "4420" 00:28:35.175 } 00:28:35.175 ], 00:28:35.175 "allow_any_host": true, 00:28:35.175 "hosts": [], 00:28:35.175 "serial_number": "SPDK00000000000001", 00:28:35.175 "model_number": "SPDK bdev Controller", 00:28:35.175 "max_namespaces": 2, 00:28:35.175 "min_cntlid": 1, 00:28:35.175 "max_cntlid": 65519, 00:28:35.175 "namespaces": [ 00:28:35.175 { 00:28:35.175 "nsid": 1, 00:28:35.175 "bdev_name": "Malloc0", 00:28:35.175 "name": "Malloc0", 00:28:35.175 "nguid": "5202759F339540DA88597516113D56C6", 00:28:35.175 "uuid": "5202759f-3395-40da-8859-7516113d56c6" 00:28:35.175 } 00:28:35.175 ] 00:28:35.175 } 00:28:35.175 ] 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2729086 00:28:35.175 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:35.176 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:35.176 09:06:57 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:28:35.176 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:35.176 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:28:35.176 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:28:35.176 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:28:35.176 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.176 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:35.176 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:28:35.176 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:28:35.176 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:28:35.176 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:35.176 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 2 -lt 200 ']' 00:28:35.176 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=3 00:28:35.176 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:35.437 Malloc1 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:35.437 Asynchronous Event Request test 00:28:35.437 Attaching to 10.0.0.2 00:28:35.437 Attached to 10.0.0.2 00:28:35.437 Registering asynchronous event callbacks... 00:28:35.437 Starting namespace attribute notice tests for all controllers... 00:28:35.437 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:35.437 aer_cb - Changed Namespace 00:28:35.437 Cleaning up... 
00:28:35.437 [ 00:28:35.437 { 00:28:35.437 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:35.437 "subtype": "Discovery", 00:28:35.437 "listen_addresses": [], 00:28:35.437 "allow_any_host": true, 00:28:35.437 "hosts": [] 00:28:35.437 }, 00:28:35.437 { 00:28:35.437 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:35.437 "subtype": "NVMe", 00:28:35.437 "listen_addresses": [ 00:28:35.437 { 00:28:35.437 "trtype": "TCP", 00:28:35.437 "adrfam": "IPv4", 00:28:35.437 "traddr": "10.0.0.2", 00:28:35.437 "trsvcid": "4420" 00:28:35.437 } 00:28:35.437 ], 00:28:35.437 "allow_any_host": true, 00:28:35.437 "hosts": [], 00:28:35.437 "serial_number": "SPDK00000000000001", 00:28:35.437 "model_number": "SPDK bdev Controller", 00:28:35.437 "max_namespaces": 2, 00:28:35.437 "min_cntlid": 1, 00:28:35.437 "max_cntlid": 65519, 00:28:35.437 "namespaces": [ 00:28:35.437 { 00:28:35.437 "nsid": 1, 00:28:35.437 "bdev_name": "Malloc0", 00:28:35.437 "name": "Malloc0", 00:28:35.437 "nguid": "5202759F339540DA88597516113D56C6", 00:28:35.437 "uuid": "5202759f-3395-40da-8859-7516113d56c6" 00:28:35.437 }, 00:28:35.437 { 00:28:35.437 "nsid": 2, 00:28:35.437 "bdev_name": "Malloc1", 00:28:35.437 "name": "Malloc1", 00:28:35.437 "nguid": "F5F137EA494D4B9C8EA2CEC1C0A25CC7", 00:28:35.437 "uuid": "f5f137ea-494d-4b9c-8ea2-cec1c0a25cc7" 00:28:35.437 } 00:28:35.437 ] 00:28:35.437 } 00:28:35.437 ] 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2729086 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete 
Malloc1 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:35.437 09:06:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:35.437 rmmod nvme_tcp 00:28:35.437 rmmod nvme_fabrics 00:28:35.699 rmmod nvme_keyring 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2729030 ']' 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2729030 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 2729030 ']' 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- 
# kill -0 2729030 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2729030 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2729030' 00:28:35.699 killing process with pid 2729030 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@968 -- # kill 2729030 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@973 -- # wait 2729030 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:35.699 09:06:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.254 09:07:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:38.254 00:28:38.254 real 0m10.827s 00:28:38.254 user 0m7.968s 00:28:38.254 sys 0m5.541s 00:28:38.254 09:07:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:38.254 09:07:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 
00:28:38.254 ************************************ 00:28:38.254 END TEST nvmf_aer 00:28:38.254 ************************************ 00:28:38.254 09:07:00 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:38.254 09:07:00 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:38.254 09:07:00 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:38.254 09:07:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:38.254 ************************************ 00:28:38.254 START TEST nvmf_async_init 00:28:38.254 ************************************ 00:28:38.254 09:07:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:38.254 * Looking for test storage... 00:28:38.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:38.254 09:07:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:38.254 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:38.254 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:38.254 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:38.254 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:38.254 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:38.254 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:38.254 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:38.254 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:38.254 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:28:38.254 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:38.254 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.254 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:38.255 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:38.255 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.255 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.255 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:38.255 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:38.255 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:38.255 09:07:00 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.255 09:07:00 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.255 09:07:00 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.255 09:07:00 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.255 09:07:00 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.255 09:07:00 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.255 09:07:00 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- 
# nguid=91a909ea55a74959b0d27f7762d2bfae 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:38.256 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:38.257 09:07:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:38.257 09:07:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:44.858 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:44.858 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:44.858 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:44.858 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:44.858 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:44.858 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:44.858 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:44.858 
09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:44.858 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:44.858 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:44.858 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:44.858 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:44.858 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:44.858 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:44.858 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:44.858 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:44.858 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:44.859 09:07:06 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:44.859 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:44.859 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.859 09:07:06 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:44.859 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:44.859 09:07:06 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:44.859 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:44.859 09:07:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:44.859 
09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:44.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:44.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:28:44.859 00:28:44.859 --- 10.0.0.2 ping statistics --- 00:28:44.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.859 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:44.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:44.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:28:44.859 00:28:44.859 --- 10.0.0.1 ping statistics --- 00:28:44.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.859 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2733376 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2733376 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@830 -- # '[' -z 2733376 ']' 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init 
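The `nvmf_tcp_init` section of this log splits a dual-port NIC (`cvl_0_0`/`cvl_0_1`) so one machine can act as both NVMe-oF target and initiator: one port is moved into the `cvl_0_0_ns_spdk` namespace, each side gets a 10.0.0.x/24 address, and reachability is verified with `ping` before `nvme-tcp` is loaded. The same pattern can be reproduced without a second NIC using a veth pair; the sketch below uses hypothetical names (`tgt_ns`, `veth0`, `veth1`), requires root, and is an illustrative configuration fragment rather than part of the test itself:

```shell
#!/usr/bin/env bash
# Sketch of the target/initiator split performed by nvmf_tcp_init, using a
# veth pair instead of the log's physical cvl_0_* ports. Requires root.
set -euo pipefail

ip netns add tgt_ns                          # namespace playing the target host
ip link add veth0 type veth peer name veth1
ip link set veth1 netns tgt_ns               # one end becomes the "target NIC"

ip addr add 10.0.0.1/24 dev veth0            # initiator side, as in the log
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth1
ip link set veth0 up
ip netns exec tgt_ns ip link set veth1 up
ip netns exec tgt_ns ip link set lo up

# Same smoke test the log runs before starting nvmf_tgt:
ping -c 1 10.0.0.2
ip netns exec tgt_ns ping -c 1 10.0.0.1
```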
-- common/autotest_common.sh@835 -- # local max_retries=100 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:44.859 09:07:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:44.859 [2024-06-09 09:07:07.400094] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:28:44.859 [2024-06-09 09:07:07.400158] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.121 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.121 [2024-06-09 09:07:07.469339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.121 [2024-06-09 09:07:07.542968] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.121 [2024-06-09 09:07:07.543006] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.121 [2024-06-09 09:07:07.543014] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.121 [2024-06-09 09:07:07.543020] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.121 [2024-06-09 09:07:07.543026] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:45.121 [2024-06-09 09:07:07.543045] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:45.693 [2024-06-09 09:07:08.201578] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:45.693 null0 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:45.693 
09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 91a909ea55a74959b0d27f7762d2bfae 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:45.693 [2024-06-09 09:07:08.245804] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.693 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:45.954 nvme0n1 00:28:45.954 09:07:08 
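The async_init test provisions its target entirely through `rpc_cmd`, which in SPDK's test harness wraps `scripts/rpc.py` against `/var/tmp/spdk.sock`. Collected from the trace above, the sequence amounts to the following sketch (it assumes an already-running `nvmf_tgt` and `rpc.py` on `PATH`, so it is not standalone; note `bdev_null_create null0 1024 512` yields the 2,097,152 x 512 B blocks reported later by `bdev_get_bdevs`):

```shell
# RPC sequence abridged from host/async_init.sh as it appears in this log.
# Assumes nvmf_tgt is running and rpc.py is SPDK's scripts/rpc.py.
rpc.py nvmf_create_transport -t tcp -o       # TCP transport with default opts
rpc.py bdev_null_create null0 1024 512       # 1024 MiB null bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a   # -a: allow any host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
    -g 91a909ea55a74959b0d27f7762d2bfae      # nguid from `uuidgen | tr -d -`
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
    -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
```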
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.954 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:45.954 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.954 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:45.954 [ 00:28:45.954 { 00:28:45.954 "name": "nvme0n1", 00:28:45.954 "aliases": [ 00:28:45.954 "91a909ea-55a7-4959-b0d2-7f7762d2bfae" 00:28:45.954 ], 00:28:45.954 "product_name": "NVMe disk", 00:28:45.955 "block_size": 512, 00:28:45.955 "num_blocks": 2097152, 00:28:45.955 "uuid": "91a909ea-55a7-4959-b0d2-7f7762d2bfae", 00:28:45.955 "assigned_rate_limits": { 00:28:45.955 "rw_ios_per_sec": 0, 00:28:45.955 "rw_mbytes_per_sec": 0, 00:28:45.955 "r_mbytes_per_sec": 0, 00:28:45.955 "w_mbytes_per_sec": 0 00:28:45.955 }, 00:28:45.955 "claimed": false, 00:28:45.955 "zoned": false, 00:28:45.955 "supported_io_types": { 00:28:45.955 "read": true, 00:28:45.955 "write": true, 00:28:45.955 "unmap": false, 00:28:45.955 "write_zeroes": true, 00:28:45.955 "flush": true, 00:28:45.955 "reset": true, 00:28:45.955 "compare": true, 00:28:45.955 "compare_and_write": true, 00:28:45.955 "abort": true, 00:28:45.955 "nvme_admin": true, 00:28:45.955 "nvme_io": true 00:28:45.955 }, 00:28:45.955 "memory_domains": [ 00:28:45.955 { 00:28:45.955 "dma_device_id": "system", 00:28:45.955 "dma_device_type": 1 00:28:45.955 } 00:28:45.955 ], 00:28:45.955 "driver_specific": { 00:28:45.955 "nvme": [ 00:28:45.955 { 00:28:45.955 "trid": { 00:28:45.955 "trtype": "TCP", 00:28:45.955 "adrfam": "IPv4", 00:28:45.955 "traddr": "10.0.0.2", 00:28:45.955 "trsvcid": "4420", 00:28:45.955 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:45.955 }, 00:28:45.955 "ctrlr_data": { 00:28:45.955 "cntlid": 1, 00:28:45.955 "vendor_id": "0x8086", 00:28:45.955 "model_number": "SPDK bdev Controller", 00:28:45.955 "serial_number": "00000000000000000000", 
00:28:45.955 "firmware_revision": "24.09", 00:28:45.955 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:45.955 "oacs": { 00:28:45.955 "security": 0, 00:28:45.955 "format": 0, 00:28:45.955 "firmware": 0, 00:28:45.955 "ns_manage": 0 00:28:45.955 }, 00:28:45.955 "multi_ctrlr": true, 00:28:45.955 "ana_reporting": false 00:28:45.955 }, 00:28:45.955 "vs": { 00:28:45.955 "nvme_version": "1.3" 00:28:45.955 }, 00:28:45.955 "ns_data": { 00:28:45.955 "id": 1, 00:28:45.955 "can_share": true 00:28:45.955 } 00:28:45.955 } 00:28:45.955 ], 00:28:45.955 "mp_policy": "active_passive" 00:28:45.955 } 00:28:45.955 } 00:28:45.955 ] 00:28:45.955 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.955 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:45.955 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.955 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:45.955 [2024-06-09 09:07:08.502631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:45.955 [2024-06-09 09:07:08.502695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a7210 (9): Bad file descriptor 00:28:46.216 [2024-06-09 09:07:08.644497] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
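The test passes `nvmf_subsystem_add_ns` the output of `uuidgen | tr -d -`, and `bdev_get_bdevs` reports the same value re-hyphenated in the namespace's `uuid` and `aliases` fields. That round trip can be checked with plain `sed`, nothing SPDK-specific:

```shell
# Re-hyphenate the 32-char nguid into the canonical 8-4-4-4-12 UUID form
# that bdev_get_bdevs reports back for the namespace.
nguid=91a909ea55a74959b0d27f7762d2bfae
uuid=$(printf '%s\n' "$nguid" |
    sed -E 's/^(.{8})(.{4})(.{4})(.{4})(.{12})$/\1-\2-\3-\4-\5/')
echo "$uuid"    # 91a909ea-55a7-4959-b0d2-7f7762d2bfae
```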
00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:46.216 [ 00:28:46.216 { 00:28:46.216 "name": "nvme0n1", 00:28:46.216 "aliases": [ 00:28:46.216 "91a909ea-55a7-4959-b0d2-7f7762d2bfae" 00:28:46.216 ], 00:28:46.216 "product_name": "NVMe disk", 00:28:46.216 "block_size": 512, 00:28:46.216 "num_blocks": 2097152, 00:28:46.216 "uuid": "91a909ea-55a7-4959-b0d2-7f7762d2bfae", 00:28:46.216 "assigned_rate_limits": { 00:28:46.216 "rw_ios_per_sec": 0, 00:28:46.216 "rw_mbytes_per_sec": 0, 00:28:46.216 "r_mbytes_per_sec": 0, 00:28:46.216 "w_mbytes_per_sec": 0 00:28:46.216 }, 00:28:46.216 "claimed": false, 00:28:46.216 "zoned": false, 00:28:46.216 "supported_io_types": { 00:28:46.216 "read": true, 00:28:46.216 "write": true, 00:28:46.216 "unmap": false, 00:28:46.216 "write_zeroes": true, 00:28:46.216 "flush": true, 00:28:46.216 "reset": true, 00:28:46.216 "compare": true, 00:28:46.216 "compare_and_write": true, 00:28:46.216 "abort": true, 00:28:46.216 "nvme_admin": true, 00:28:46.216 "nvme_io": true 00:28:46.216 }, 00:28:46.216 "memory_domains": [ 00:28:46.216 { 00:28:46.216 "dma_device_id": "system", 00:28:46.216 "dma_device_type": 1 00:28:46.216 } 00:28:46.216 ], 00:28:46.216 "driver_specific": { 00:28:46.216 "nvme": [ 00:28:46.216 { 00:28:46.216 "trid": { 00:28:46.216 "trtype": "TCP", 00:28:46.216 "adrfam": "IPv4", 00:28:46.216 "traddr": "10.0.0.2", 00:28:46.216 "trsvcid": "4420", 00:28:46.216 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:46.216 }, 00:28:46.216 "ctrlr_data": { 00:28:46.216 "cntlid": 2, 00:28:46.216 "vendor_id": "0x8086", 00:28:46.216 "model_number": "SPDK bdev Controller", 00:28:46.216 "serial_number": 
"00000000000000000000", 00:28:46.216 "firmware_revision": "24.09", 00:28:46.216 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:46.216 "oacs": { 00:28:46.216 "security": 0, 00:28:46.216 "format": 0, 00:28:46.216 "firmware": 0, 00:28:46.216 "ns_manage": 0 00:28:46.216 }, 00:28:46.216 "multi_ctrlr": true, 00:28:46.216 "ana_reporting": false 00:28:46.216 }, 00:28:46.216 "vs": { 00:28:46.216 "nvme_version": "1.3" 00:28:46.216 }, 00:28:46.216 "ns_data": { 00:28:46.216 "id": 1, 00:28:46.216 "can_share": true 00:28:46.216 } 00:28:46.216 } 00:28:46.216 ], 00:28:46.216 "mp_policy": "active_passive" 00:28:46.216 } 00:28:46.216 } 00:28:46.216 ] 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.qyTo3GzsG8 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.qyTo3GzsG8 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:46.216 [2024-06-09 09:07:08.699224] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:46.216 [2024-06-09 09:07:08.699352] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qyTo3GzsG8 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:46.216 [2024-06-09 09:07:08.707237] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:46.216 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.217 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qyTo3GzsG8 00:28:46.217 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.217 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:46.217 [2024-06-09 09:07:08.715264] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:28:46.217 [2024-06-09 09:07:08.715301] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:46.478 nvme0n1 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:46.478 [ 00:28:46.478 { 00:28:46.478 "name": "nvme0n1", 00:28:46.478 "aliases": [ 00:28:46.478 "91a909ea-55a7-4959-b0d2-7f7762d2bfae" 00:28:46.478 ], 00:28:46.478 "product_name": "NVMe disk", 00:28:46.478 "block_size": 512, 00:28:46.478 "num_blocks": 2097152, 00:28:46.478 "uuid": "91a909ea-55a7-4959-b0d2-7f7762d2bfae", 00:28:46.478 "assigned_rate_limits": { 00:28:46.478 "rw_ios_per_sec": 0, 00:28:46.478 "rw_mbytes_per_sec": 0, 00:28:46.478 "r_mbytes_per_sec": 0, 00:28:46.478 "w_mbytes_per_sec": 0 00:28:46.478 }, 00:28:46.478 "claimed": false, 00:28:46.478 "zoned": false, 00:28:46.478 "supported_io_types": { 00:28:46.478 "read": true, 00:28:46.478 "write": true, 00:28:46.478 "unmap": false, 00:28:46.478 "write_zeroes": true, 00:28:46.478 "flush": true, 00:28:46.478 "reset": true, 00:28:46.478 "compare": true, 00:28:46.478 "compare_and_write": true, 00:28:46.478 "abort": true, 00:28:46.478 "nvme_admin": true, 00:28:46.478 "nvme_io": true 00:28:46.478 }, 00:28:46.478 "memory_domains": [ 00:28:46.478 { 00:28:46.478 "dma_device_id": "system", 00:28:46.478 "dma_device_type": 1 00:28:46.478 } 00:28:46.478 ], 00:28:46.478 "driver_specific": { 00:28:46.478 "nvme": [ 00:28:46.478 { 00:28:46.478 "trid": { 00:28:46.478 "trtype": "TCP", 00:28:46.478 "adrfam": "IPv4", 00:28:46.478 "traddr": "10.0.0.2", 00:28:46.478 "trsvcid": "4421", 00:28:46.478 "subnqn": 
"nqn.2016-06.io.spdk:cnode0" 00:28:46.478 }, 00:28:46.478 "ctrlr_data": { 00:28:46.478 "cntlid": 3, 00:28:46.478 "vendor_id": "0x8086", 00:28:46.478 "model_number": "SPDK bdev Controller", 00:28:46.478 "serial_number": "00000000000000000000", 00:28:46.478 "firmware_revision": "24.09", 00:28:46.478 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:46.478 "oacs": { 00:28:46.478 "security": 0, 00:28:46.478 "format": 0, 00:28:46.478 "firmware": 0, 00:28:46.478 "ns_manage": 0 00:28:46.478 }, 00:28:46.478 "multi_ctrlr": true, 00:28:46.478 "ana_reporting": false 00:28:46.478 }, 00:28:46.478 "vs": { 00:28:46.478 "nvme_version": "1.3" 00:28:46.478 }, 00:28:46.478 "ns_data": { 00:28:46.478 "id": 1, 00:28:46.478 "can_share": true 00:28:46.478 } 00:28:46.478 } 00:28:46.478 ], 00:28:46.478 "mp_policy": "active_passive" 00:28:46.478 } 00:28:46.478 } 00:28:46.478 ] 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.qyTo3GzsG8 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set 
+e 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:46.478 rmmod nvme_tcp 00:28:46.478 rmmod nvme_fabrics 00:28:46.478 rmmod nvme_keyring 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2733376 ']' 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2733376 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 2733376 ']' 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 2733376 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2733376 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2733376' 00:28:46.478 killing process with pid 2733376 00:28:46.478 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 2733376 00:28:46.479 [2024-06-09 09:07:08.945043] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:46.479 [2024-06-09 09:07:08.945070] 
app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:46.479 09:07:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 2733376 00:28:46.740 09:07:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:46.740 09:07:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:46.740 09:07:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:46.740 09:07:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:46.740 09:07:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:46.740 09:07:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.740 09:07:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:46.740 09:07:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.656 09:07:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:48.656 00:28:48.656 real 0m10.755s 00:28:48.656 user 0m3.757s 00:28:48.656 sys 0m5.379s 00:28:48.656 09:07:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:48.656 09:07:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:48.656 ************************************ 00:28:48.656 END TEST nvmf_async_init 00:28:48.656 ************************************ 00:28:48.656 09:07:11 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:48.656 09:07:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:48.656 09:07:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:48.656 09:07:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:48.656 
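In the async_init TLS steps above (`host/async_init.sh@53`–`@65`) the script mktemps a key file, writes the TLS interchange-format PSK with `echo -n`, chmods it to 0600, and hands the path to `nvmf_subsystem_add_host --psk` and `bdev_nvme_attach_controller --psk` (both flagged experimental/deprecated in the log). The RPC half needs a live SPDK target, so this sketch reproduces only the key-file handling, reusing the key string from the log:

```shell
# PSK file setup as in async_init.sh: private temp file, no trailing newline.
key_path=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"

mode=$(stat -c '%a' "$key_path")   # GNU stat, as on the Linux test node
echo "mode=$mode"                  # → mode=600
rm -f "$key_path"
```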
************************************ 00:28:48.656 START TEST dma 00:28:48.656 ************************************ 00:28:48.656 09:07:11 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:48.917 * Looking for test storage... 00:28:48.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:48.917 09:07:11 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.917 09:07:11 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.917 09:07:11 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.917 09:07:11 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.917 09:07:11 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.917 09:07:11 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.917 09:07:11 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.917 09:07:11 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:48.917 09:07:11 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.917 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.918 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:48.918 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:48.918 09:07:11 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:48.918 09:07:11 nvmf_tcp.dma -- 
host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:48.918 09:07:11 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:48.918 00:28:48.918 real 0m0.125s 00:28:48.918 user 0m0.051s 00:28:48.918 sys 0m0.082s 00:28:48.918 09:07:11 nvmf_tcp.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:48.918 09:07:11 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:48.918 ************************************ 00:28:48.918 END TEST dma 00:28:48.918 ************************************ 00:28:48.918 09:07:11 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:48.918 09:07:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:48.918 09:07:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:48.918 09:07:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:48.918 ************************************ 00:28:48.918 START TEST nvmf_identify 00:28:48.918 ************************************ 00:28:48.918 09:07:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:49.180 * Looking for test storage... 
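`TEST dma` above finishes in well under a second because `host/dma.sh@12`–`@13` bail out before doing anything: the dma test only applies to the rdma transport, and this run uses `--transport=tcp`. The same gate, restated as a function so it can run standalone (the function name and message are illustrative):

```shell
# Transport gate equivalent to host/dma.sh's `'[' tcp '!=' rdma ']'` + `exit 0`.
dma_guard() {
    local transport=$1
    if [ "$transport" != "rdma" ]; then
        echo "skipping dma test: transport=$transport"
        return 0    # dma.sh does `exit 0` here
    fi
    echo "running dma test"
}

dma_guard tcp    # → skipping dma test: transport=tcp
```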
00:28:49.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.180 09:07:11 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:49.180 09:07:11 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:49.180 09:07:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:49.181 09:07:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:55.776 09:07:18 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:55.776 
09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:55.776 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:55.776 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:55.776 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:55.776 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.038 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:56.038 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:28:56.038 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.038 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:56.038 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:56.038 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.038 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:56.038 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:56.038 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:56.039 09:07:18 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:56.039 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:56.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:28:56.301 00:28:56.301 --- 10.0.0.2 ping statistics --- 00:28:56.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.301 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:56.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:56.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:28:56.301 00:28:56.301 --- 10.0.0.1 ping statistics --- 00:28:56.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.301 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2737770 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2737770 00:28:56.301 09:07:18 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 2737770 ']' 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:56.301 09:07:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:56.301 [2024-06-09 09:07:18.759092] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:28:56.301 [2024-06-09 09:07:18.759157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.301 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.301 [2024-06-09 09:07:18.829388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:56.563 [2024-06-09 09:07:18.905873] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.563 [2024-06-09 09:07:18.905911] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.563 [2024-06-09 09:07:18.905919] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:56.563 [2024-06-09 09:07:18.905925] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:56.563 [2024-06-09 09:07:18.905931] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:56.563 [2024-06-09 09:07:18.906077] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.563 [2024-06-09 09:07:18.906188] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:56.563 [2024-06-09 09:07:18.906348] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.563 [2024-06-09 09:07:18.906350] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:57.165 [2024-06-09 09:07:19.543836] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:57.165 Malloc0 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:57.165 
09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:57.165 [2024-06-09 09:07:19.643219] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.165 09:07:19 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:57.165 [ 00:28:57.165 { 00:28:57.165 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:57.165 "subtype": "Discovery", 00:28:57.165 "listen_addresses": [ 00:28:57.165 { 00:28:57.165 "trtype": "TCP", 00:28:57.165 "adrfam": "IPv4", 00:28:57.165 "traddr": "10.0.0.2", 00:28:57.165 "trsvcid": "4420" 00:28:57.165 } 00:28:57.165 ], 00:28:57.165 "allow_any_host": true, 00:28:57.165 "hosts": [] 00:28:57.165 }, 00:28:57.165 { 00:28:57.165 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:57.165 "subtype": "NVMe", 00:28:57.165 "listen_addresses": [ 00:28:57.165 { 00:28:57.165 "trtype": "TCP", 00:28:57.165 "adrfam": "IPv4", 00:28:57.165 "traddr": "10.0.0.2", 00:28:57.165 "trsvcid": "4420" 00:28:57.165 } 00:28:57.165 ], 00:28:57.165 "allow_any_host": true, 00:28:57.165 "hosts": [], 00:28:57.165 "serial_number": "SPDK00000000000001", 00:28:57.165 "model_number": "SPDK bdev Controller", 00:28:57.165 "max_namespaces": 32, 00:28:57.165 "min_cntlid": 1, 00:28:57.165 "max_cntlid": 65519, 00:28:57.165 "namespaces": [ 00:28:57.165 { 00:28:57.165 "nsid": 1, 00:28:57.165 "bdev_name": "Malloc0", 00:28:57.165 "name": "Malloc0", 00:28:57.165 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:57.165 "eui64": "ABCDEF0123456789", 00:28:57.165 "uuid": "556b9a5c-d6d5-429f-ae88-0d3017f4f805" 00:28:57.165 } 00:28:57.165 ] 00:28:57.165 } 00:28:57.165 ] 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.165 09:07:19 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:57.165 [2024-06-09 09:07:19.703347] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:28:57.165 [2024-06-09 09:07:19.703386] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2738100 ] 00:28:57.165 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.429 [2024-06-09 09:07:19.735064] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:57.429 [2024-06-09 09:07:19.735107] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:57.429 [2024-06-09 09:07:19.735112] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:57.429 [2024-06-09 09:07:19.735122] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:57.429 [2024-06-09 09:07:19.735130] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:57.429 [2024-06-09 09:07:19.738432] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:57.429 [2024-06-09 09:07:19.738463] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9bcec0 0 00:28:57.429 [2024-06-09 09:07:19.746408] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:57.429 [2024-06-09 09:07:19.746426] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:57.429 [2024-06-09 09:07:19.746432] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:57.429 [2024-06-09 09:07:19.746436] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:57.429 [2024-06-09 09:07:19.746473] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.429 [2024-06-09 09:07:19.746478] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:28:57.429 [2024-06-09 09:07:19.746483] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bcec0) 00:28:57.429 [2024-06-09 09:07:19.746495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:57.429 [2024-06-09 09:07:19.746513] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3fdf0, cid 0, qid 0 00:28:57.429 [2024-06-09 09:07:19.753411] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.429 [2024-06-09 09:07:19.753420] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.429 [2024-06-09 09:07:19.753424] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.429 [2024-06-09 09:07:19.753429] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa3fdf0) on tqpair=0x9bcec0 00:28:57.429 [2024-06-09 09:07:19.753438] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:57.429 [2024-06-09 09:07:19.753445] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:57.429 [2024-06-09 09:07:19.753450] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:57.429 [2024-06-09 09:07:19.753461] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.429 [2024-06-09 09:07:19.753465] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.429 [2024-06-09 09:07:19.753469] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bcec0) 00:28:57.429 [2024-06-09 09:07:19.753476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.429 [2024-06-09 09:07:19.753488] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xa3fdf0, cid 0, qid 0 00:28:57.429 [2024-06-09 09:07:19.753744] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.429 [2024-06-09 09:07:19.753752] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.429 [2024-06-09 09:07:19.753755] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.429 [2024-06-09 09:07:19.753759] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa3fdf0) on tqpair=0x9bcec0 00:28:57.429 [2024-06-09 09:07:19.753765] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:57.429 [2024-06-09 09:07:19.753773] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:57.429 [2024-06-09 09:07:19.753780] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.429 [2024-06-09 09:07:19.753784] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.429 [2024-06-09 09:07:19.753787] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bcec0) 00:28:57.429 [2024-06-09 09:07:19.753794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.429 [2024-06-09 09:07:19.753805] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3fdf0, cid 0, qid 0 00:28:57.429 [2024-06-09 09:07:19.754016] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.429 [2024-06-09 09:07:19.754022] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.429 [2024-06-09 09:07:19.754025] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.429 [2024-06-09 09:07:19.754029] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa3fdf0) on tqpair=0x9bcec0 00:28:57.429 [2024-06-09 09:07:19.754035] 
nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:57.429 [2024-06-09 09:07:19.754043] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:57.429 [2024-06-09 09:07:19.754049] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.429 [2024-06-09 09:07:19.754053] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.429 [2024-06-09 09:07:19.754057] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bcec0) 00:28:57.430 [2024-06-09 09:07:19.754064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.430 [2024-06-09 09:07:19.754074] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3fdf0, cid 0, qid 0 00:28:57.430 [2024-06-09 09:07:19.754287] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.430 [2024-06-09 09:07:19.754294] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.430 [2024-06-09 09:07:19.754297] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.430 [2024-06-09 09:07:19.754301] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa3fdf0) on tqpair=0x9bcec0 00:28:57.430 [2024-06-09 09:07:19.754306] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:57.430 [2024-06-09 09:07:19.754315] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.430 [2024-06-09 09:07:19.754319] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.430 [2024-06-09 09:07:19.754322] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bcec0) 00:28:57.430 
[2024-06-09 09:07:19.754329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.430 [2024-06-09 09:07:19.754338] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3fdf0, cid 0, qid 0 00:28:57.430 [2024-06-09 09:07:19.754553] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.430 [2024-06-09 09:07:19.754560] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.430 [2024-06-09 09:07:19.754567] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.430 [2024-06-09 09:07:19.754571] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa3fdf0) on tqpair=0x9bcec0 00:28:57.430 [2024-06-09 09:07:19.754576] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:57.430 [2024-06-09 09:07:19.754581] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:57.430 [2024-06-09 09:07:19.754588] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:57.430 [2024-06-09 09:07:19.754694] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:57.430 [2024-06-09 09:07:19.754698] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:57.430 [2024-06-09 09:07:19.754707] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.430 [2024-06-09 09:07:19.754711] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.430 [2024-06-09 09:07:19.754714] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x9bcec0) 00:28:57.430 [2024-06-09 09:07:19.754721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.430 [2024-06-09 09:07:19.754732] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3fdf0, cid 0, qid 0 00:28:57.430 [2024-06-09 09:07:19.754942] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.430 [2024-06-09 09:07:19.754949] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.430 [2024-06-09 09:07:19.754952] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.430 [2024-06-09 09:07:19.754956] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa3fdf0) on tqpair=0x9bcec0 00:28:57.430 [2024-06-09 09:07:19.754961] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:57.430 [2024-06-09 09:07:19.754969] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.430 [2024-06-09 09:07:19.754973] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.430 [2024-06-09 09:07:19.754977] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bcec0) 00:28:57.430 [2024-06-09 09:07:19.754984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.430 [2024-06-09 09:07:19.754994] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3fdf0, cid 0, qid 0 00:28:57.430 [2024-06-09 09:07:19.755240] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.430 [2024-06-09 09:07:19.755246] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.430 [2024-06-09 09:07:19.755250] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.430 [2024-06-09 
09:07:19.755253] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa3fdf0) on tqpair=0x9bcec0 00:28:57.430 [2024-06-09 09:07:19.755258] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:57.430 [2024-06-09 09:07:19.755262] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:57.430 [2024-06-09 09:07:19.755270] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:57.430 [2024-06-09 09:07:19.755284] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:57.430 [2024-06-09 09:07:19.755293] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.430 [2024-06-09 09:07:19.755299] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bcec0) 00:28:57.430 [2024-06-09 09:07:19.755306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.430 [2024-06-09 09:07:19.755317] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3fdf0, cid 0, qid 0 00:28:57.430 [2024-06-09 09:07:19.755587] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:57.430 [2024-06-09 09:07:19.755595] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:57.430 [2024-06-09 09:07:19.755598] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:57.430 [2024-06-09 09:07:19.755602] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9bcec0): datao=0, datal=4096, cccid=0 00:28:57.430 [2024-06-09 09:07:19.755607] 
nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa3fdf0) on tqpair(0x9bcec0): expected_datao=0, payload_size=4096 00:28:57.430 [2024-06-09 09:07:19.755611] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.430 [2024-06-09 09:07:19.755776] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:57.430 [2024-06-09 09:07:19.755780] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:57.430 [2024-06-09 09:07:19.796637] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.430 [2024-06-09 09:07:19.796649] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.430 [2024-06-09 09:07:19.796652] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.430 [2024-06-09 09:07:19.796656] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa3fdf0) on tqpair=0x9bcec0 00:28:57.430 [2024-06-09 09:07:19.796665] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:57.430 [2024-06-09 09:07:19.796670] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:57.430 [2024-06-09 09:07:19.796674] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:57.430 [2024-06-09 09:07:19.796679] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:57.430 [2024-06-09 09:07:19.796683] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:57.430 [2024-06-09 09:07:19.796688] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:57.430 [2024-06-09 09:07:19.796700] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:57.430 [2024-06-09 09:07:19.796709] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.430 [2024-06-09 09:07:19.796713] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.430 [2024-06-09 09:07:19.796716] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bcec0) 00:28:57.430 [2024-06-09 09:07:19.796724] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:57.431 [2024-06-09 09:07:19.796737] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3fdf0, cid 0, qid 0 00:28:57.431 [2024-06-09 09:07:19.796915] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.431 [2024-06-09 09:07:19.796921] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.431 [2024-06-09 09:07:19.796925] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.796928] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa3fdf0) on tqpair=0x9bcec0 00:28:57.431 [2024-06-09 09:07:19.796938] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.796942] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.796946] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bcec0) 00:28:57.431 [2024-06-09 09:07:19.796955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.431 [2024-06-09 09:07:19.796961] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.796965] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.431 [2024-06-09 
09:07:19.796968] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9bcec0) 00:28:57.431 [2024-06-09 09:07:19.796974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.431 [2024-06-09 09:07:19.796980] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.796983] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.796987] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9bcec0) 00:28:57.431 [2024-06-09 09:07:19.796992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.431 [2024-06-09 09:07:19.796998] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.797002] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.797005] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcec0) 00:28:57.431 [2024-06-09 09:07:19.797011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.431 [2024-06-09 09:07:19.797016] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:57.431 [2024-06-09 09:07:19.797024] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:57.431 [2024-06-09 09:07:19.797030] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.797033] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9bcec0) 00:28:57.431 [2024-06-09 
09:07:19.797040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.431 [2024-06-09 09:07:19.797052] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3fdf0, cid 0, qid 0 00:28:57.431 [2024-06-09 09:07:19.797057] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3ff50, cid 1, qid 0 00:28:57.431 [2024-06-09 09:07:19.797062] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa400b0, cid 2, qid 0 00:28:57.431 [2024-06-09 09:07:19.797067] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa40210, cid 3, qid 0 00:28:57.431 [2024-06-09 09:07:19.797071] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa40370, cid 4, qid 0 00:28:57.431 [2024-06-09 09:07:19.797340] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.431 [2024-06-09 09:07:19.797346] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.431 [2024-06-09 09:07:19.797349] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.797353] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa40370) on tqpair=0x9bcec0 00:28:57.431 [2024-06-09 09:07:19.797358] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:57.431 [2024-06-09 09:07:19.797366] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:57.431 [2024-06-09 09:07:19.797377] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.797381] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9bcec0) 00:28:57.431 [2024-06-09 09:07:19.797387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
(06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.431 [2024-06-09 09:07:19.797400] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa40370, cid 4, qid 0 00:28:57.431 [2024-06-09 09:07:19.797655] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:57.431 [2024-06-09 09:07:19.797662] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:57.431 [2024-06-09 09:07:19.797665] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.797669] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9bcec0): datao=0, datal=4096, cccid=4 00:28:57.431 [2024-06-09 09:07:19.797673] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa40370) on tqpair(0x9bcec0): expected_datao=0, payload_size=4096 00:28:57.431 [2024-06-09 09:07:19.797677] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.797684] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.797688] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.797835] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.431 [2024-06-09 09:07:19.797842] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.431 [2024-06-09 09:07:19.797845] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.797849] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa40370) on tqpair=0x9bcec0 00:28:57.431 [2024-06-09 09:07:19.797860] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:57.431 [2024-06-09 09:07:19.797886] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.797891] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9bcec0) 00:28:57.431 [2024-06-09 09:07:19.797897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.431 [2024-06-09 09:07:19.797904] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.797908] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.797912] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9bcec0) 00:28:57.431 [2024-06-09 09:07:19.797918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.431 [2024-06-09 09:07:19.797932] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa40370, cid 4, qid 0 00:28:57.431 [2024-06-09 09:07:19.797937] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa404d0, cid 5, qid 0 00:28:57.431 [2024-06-09 09:07:19.798196] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:57.431 [2024-06-09 09:07:19.798202] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:57.431 [2024-06-09 09:07:19.798206] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.798209] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9bcec0): datao=0, datal=1024, cccid=4 00:28:57.431 [2024-06-09 09:07:19.798213] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa40370) on tqpair(0x9bcec0): expected_datao=0, payload_size=1024 00:28:57.431 [2024-06-09 09:07:19.798217] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.798224] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.798228] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:57.431 [2024-06-09 09:07:19.798233] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.431 [2024-06-09 09:07:19.798239] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.431 [2024-06-09 09:07:19.798242] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.432 [2024-06-09 09:07:19.798246] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa404d0) on tqpair=0x9bcec0 00:28:57.432 [2024-06-09 09:07:19.838632] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.432 [2024-06-09 09:07:19.838644] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.432 [2024-06-09 09:07:19.838653] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.432 [2024-06-09 09:07:19.838657] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa40370) on tqpair=0x9bcec0 00:28:57.432 [2024-06-09 09:07:19.838672] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.432 [2024-06-09 09:07:19.838677] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9bcec0) 00:28:57.432 [2024-06-09 09:07:19.838684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.432 [2024-06-09 09:07:19.838700] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa40370, cid 4, qid 0 00:28:57.432 [2024-06-09 09:07:19.838954] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:57.432 [2024-06-09 09:07:19.838962] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:57.432 [2024-06-09 09:07:19.838965] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:57.432 [2024-06-09 09:07:19.838969] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0x9bcec0): datao=0, datal=3072, cccid=4 00:28:57.432 [2024-06-09 09:07:19.838973] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa40370) on tqpair(0x9bcec0): expected_datao=0, payload_size=3072 00:28:57.432 [2024-06-09 09:07:19.838977] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.432 [2024-06-09 09:07:19.838984] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:57.432 [2024-06-09 09:07:19.838988] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:57.432 [2024-06-09 09:07:19.879601] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.432 [2024-06-09 09:07:19.879613] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.432 [2024-06-09 09:07:19.879617] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.432 [2024-06-09 09:07:19.879621] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa40370) on tqpair=0x9bcec0 00:28:57.432 [2024-06-09 09:07:19.879631] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.432 [2024-06-09 09:07:19.879635] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9bcec0) 00:28:57.432 [2024-06-09 09:07:19.879642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.432 [2024-06-09 09:07:19.879657] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa40370, cid 4, qid 0 00:28:57.432 [2024-06-09 09:07:19.879849] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:57.432 [2024-06-09 09:07:19.879856] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:57.432 [2024-06-09 09:07:19.879859] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:57.432 [2024-06-09 09:07:19.879862] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: c2h_data info on tqpair(0x9bcec0): datao=0, datal=8, cccid=4 00:28:57.432 [2024-06-09 09:07:19.879867] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa40370) on tqpair(0x9bcec0): expected_datao=0, payload_size=8 00:28:57.432 [2024-06-09 09:07:19.879871] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.432 [2024-06-09 09:07:19.879877] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:57.432 [2024-06-09 09:07:19.879881] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:57.432 [2024-06-09 09:07:19.920645] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.432 [2024-06-09 09:07:19.920658] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.432 [2024-06-09 09:07:19.920661] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.432 [2024-06-09 09:07:19.920665] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa40370) on tqpair=0x9bcec0 00:28:57.432 ===================================================== 00:28:57.432 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:57.432 ===================================================== 00:28:57.432 Controller Capabilities/Features 00:28:57.432 ================================ 00:28:57.432 Vendor ID: 0000 00:28:57.432 Subsystem Vendor ID: 0000 00:28:57.432 Serial Number: .................... 00:28:57.432 Model Number: ........................................ 
00:28:57.432 Firmware Version: 24.09 00:28:57.432 Recommended Arb Burst: 0 00:28:57.432 IEEE OUI Identifier: 00 00 00 00:28:57.432 Multi-path I/O 00:28:57.432 May have multiple subsystem ports: No 00:28:57.432 May have multiple controllers: No 00:28:57.432 Associated with SR-IOV VF: No 00:28:57.432 Max Data Transfer Size: 131072 00:28:57.432 Max Number of Namespaces: 0 00:28:57.432 Max Number of I/O Queues: 1024 00:28:57.432 NVMe Specification Version (VS): 1.3 00:28:57.432 NVMe Specification Version (Identify): 1.3 00:28:57.432 Maximum Queue Entries: 128 00:28:57.432 Contiguous Queues Required: Yes 00:28:57.432 Arbitration Mechanisms Supported 00:28:57.432 Weighted Round Robin: Not Supported 00:28:57.432 Vendor Specific: Not Supported 00:28:57.432 Reset Timeout: 15000 ms 00:28:57.432 Doorbell Stride: 4 bytes 00:28:57.432 NVM Subsystem Reset: Not Supported 00:28:57.432 Command Sets Supported 00:28:57.432 NVM Command Set: Supported 00:28:57.432 Boot Partition: Not Supported 00:28:57.432 Memory Page Size Minimum: 4096 bytes 00:28:57.432 Memory Page Size Maximum: 4096 bytes 00:28:57.432 Persistent Memory Region: Not Supported 00:28:57.432 Optional Asynchronous Events Supported 00:28:57.432 Namespace Attribute Notices: Not Supported 00:28:57.432 Firmware Activation Notices: Not Supported 00:28:57.432 ANA Change Notices: Not Supported 00:28:57.432 PLE Aggregate Log Change Notices: Not Supported 00:28:57.432 LBA Status Info Alert Notices: Not Supported 00:28:57.432 EGE Aggregate Log Change Notices: Not Supported 00:28:57.432 Normal NVM Subsystem Shutdown event: Not Supported 00:28:57.432 Zone Descriptor Change Notices: Not Supported 00:28:57.432 Discovery Log Change Notices: Supported 00:28:57.432 Controller Attributes 00:28:57.432 128-bit Host Identifier: Not Supported 00:28:57.432 Non-Operational Permissive Mode: Not Supported 00:28:57.432 NVM Sets: Not Supported 00:28:57.432 Read Recovery Levels: Not Supported 00:28:57.432 Endurance Groups: Not Supported 00:28:57.432 
Predictable Latency Mode: Not Supported 00:28:57.432 Traffic Based Keep ALive: Not Supported 00:28:57.432 Namespace Granularity: Not Supported 00:28:57.432 SQ Associations: Not Supported 00:28:57.432 UUID List: Not Supported 00:28:57.432 Multi-Domain Subsystem: Not Supported 00:28:57.432 Fixed Capacity Management: Not Supported 00:28:57.433 Variable Capacity Management: Not Supported 00:28:57.433 Delete Endurance Group: Not Supported 00:28:57.433 Delete NVM Set: Not Supported 00:28:57.433 Extended LBA Formats Supported: Not Supported 00:28:57.433 Flexible Data Placement Supported: Not Supported 00:28:57.433 00:28:57.433 Controller Memory Buffer Support 00:28:57.433 ================================ 00:28:57.433 Supported: No 00:28:57.433 00:28:57.433 Persistent Memory Region Support 00:28:57.433 ================================ 00:28:57.433 Supported: No 00:28:57.433 00:28:57.433 Admin Command Set Attributes 00:28:57.433 ============================ 00:28:57.433 Security Send/Receive: Not Supported 00:28:57.433 Format NVM: Not Supported 00:28:57.433 Firmware Activate/Download: Not Supported 00:28:57.433 Namespace Management: Not Supported 00:28:57.433 Device Self-Test: Not Supported 00:28:57.433 Directives: Not Supported 00:28:57.433 NVMe-MI: Not Supported 00:28:57.433 Virtualization Management: Not Supported 00:28:57.433 Doorbell Buffer Config: Not Supported 00:28:57.433 Get LBA Status Capability: Not Supported 00:28:57.433 Command & Feature Lockdown Capability: Not Supported 00:28:57.433 Abort Command Limit: 1 00:28:57.433 Async Event Request Limit: 4 00:28:57.433 Number of Firmware Slots: N/A 00:28:57.433 Firmware Slot 1 Read-Only: N/A 00:28:57.433 Firmware Activation Without Reset: N/A 00:28:57.433 Multiple Update Detection Support: N/A 00:28:57.433 Firmware Update Granularity: No Information Provided 00:28:57.433 Per-Namespace SMART Log: No 00:28:57.433 Asymmetric Namespace Access Log Page: Not Supported 00:28:57.433 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:28:57.433 Command Effects Log Page: Not Supported 00:28:57.433 Get Log Page Extended Data: Supported 00:28:57.433 Telemetry Log Pages: Not Supported 00:28:57.433 Persistent Event Log Pages: Not Supported 00:28:57.433 Supported Log Pages Log Page: May Support 00:28:57.433 Commands Supported & Effects Log Page: Not Supported 00:28:57.433 Feature Identifiers & Effects Log Page:May Support 00:28:57.433 NVMe-MI Commands & Effects Log Page: May Support 00:28:57.433 Data Area 4 for Telemetry Log: Not Supported 00:28:57.433 Error Log Page Entries Supported: 128 00:28:57.433 Keep Alive: Not Supported 00:28:57.433 00:28:57.433 NVM Command Set Attributes 00:28:57.433 ========================== 00:28:57.433 Submission Queue Entry Size 00:28:57.433 Max: 1 00:28:57.433 Min: 1 00:28:57.433 Completion Queue Entry Size 00:28:57.433 Max: 1 00:28:57.433 Min: 1 00:28:57.433 Number of Namespaces: 0 00:28:57.433 Compare Command: Not Supported 00:28:57.433 Write Uncorrectable Command: Not Supported 00:28:57.433 Dataset Management Command: Not Supported 00:28:57.433 Write Zeroes Command: Not Supported 00:28:57.433 Set Features Save Field: Not Supported 00:28:57.433 Reservations: Not Supported 00:28:57.433 Timestamp: Not Supported 00:28:57.433 Copy: Not Supported 00:28:57.433 Volatile Write Cache: Not Present 00:28:57.433 Atomic Write Unit (Normal): 1 00:28:57.433 Atomic Write Unit (PFail): 1 00:28:57.433 Atomic Compare & Write Unit: 1 00:28:57.433 Fused Compare & Write: Supported 00:28:57.433 Scatter-Gather List 00:28:57.433 SGL Command Set: Supported 00:28:57.433 SGL Keyed: Supported 00:28:57.433 SGL Bit Bucket Descriptor: Not Supported 00:28:57.433 SGL Metadata Pointer: Not Supported 00:28:57.433 Oversized SGL: Not Supported 00:28:57.433 SGL Metadata Address: Not Supported 00:28:57.433 SGL Offset: Supported 00:28:57.433 Transport SGL Data Block: Not Supported 00:28:57.433 Replay Protected Memory Block: Not Supported 00:28:57.433 00:28:57.433 
Firmware Slot Information 00:28:57.433 ========================= 00:28:57.433 Active slot: 0 00:28:57.433 00:28:57.433 00:28:57.433 Error Log 00:28:57.433 ========= 00:28:57.433 00:28:57.433 Active Namespaces 00:28:57.433 ================= 00:28:57.433 Discovery Log Page 00:28:57.433 ================== 00:28:57.433 Generation Counter: 2 00:28:57.433 Number of Records: 2 00:28:57.433 Record Format: 0 00:28:57.433 00:28:57.433 Discovery Log Entry 0 00:28:57.433 ---------------------- 00:28:57.433 Transport Type: 3 (TCP) 00:28:57.433 Address Family: 1 (IPv4) 00:28:57.433 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:57.433 Entry Flags: 00:28:57.433 Duplicate Returned Information: 1 00:28:57.433 Explicit Persistent Connection Support for Discovery: 1 00:28:57.433 Transport Requirements: 00:28:57.433 Secure Channel: Not Required 00:28:57.433 Port ID: 0 (0x0000) 00:28:57.433 Controller ID: 65535 (0xffff) 00:28:57.433 Admin Max SQ Size: 128 00:28:57.433 Transport Service Identifier: 4420 00:28:57.433 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:57.433 Transport Address: 10.0.0.2 00:28:57.433 Discovery Log Entry 1 00:28:57.433 ---------------------- 00:28:57.433 Transport Type: 3 (TCP) 00:28:57.433 Address Family: 1 (IPv4) 00:28:57.433 Subsystem Type: 2 (NVM Subsystem) 00:28:57.434 Entry Flags: 00:28:57.434 Duplicate Returned Information: 0 00:28:57.434 Explicit Persistent Connection Support for Discovery: 0 00:28:57.434 Transport Requirements: 00:28:57.434 Secure Channel: Not Required 00:28:57.434 Port ID: 0 (0x0000) 00:28:57.434 Controller ID: 65535 (0xffff) 00:28:57.434 Admin Max SQ Size: 128 00:28:57.434 Transport Service Identifier: 4420 00:28:57.434 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:57.434 Transport Address: 10.0.0.2 [2024-06-09 09:07:19.920751] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:57.434 [2024-06-09 09:07:19.920765] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.434 [2024-06-09 09:07:19.920773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.434 [2024-06-09 09:07:19.920779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.434 [2024-06-09 09:07:19.920785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.434 [2024-06-09 09:07:19.920793] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.920797] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.920800] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcec0) 00:28:57.434 [2024-06-09 09:07:19.920808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.434 [2024-06-09 09:07:19.920822] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa40210, cid 3, qid 0 00:28:57.434 [2024-06-09 09:07:19.921089] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.434 [2024-06-09 09:07:19.921096] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.434 [2024-06-09 09:07:19.921099] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.921103] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa40210) on tqpair=0x9bcec0 00:28:57.434 [2024-06-09 09:07:19.921110] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.921114] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.434 [2024-06-09 
09:07:19.921117] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcec0) 00:28:57.434 [2024-06-09 09:07:19.921124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.434 [2024-06-09 09:07:19.921137] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa40210, cid 3, qid 0 00:28:57.434 [2024-06-09 09:07:19.921412] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.434 [2024-06-09 09:07:19.921419] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.434 [2024-06-09 09:07:19.921422] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.921426] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa40210) on tqpair=0x9bcec0 00:28:57.434 [2024-06-09 09:07:19.921430] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:57.434 [2024-06-09 09:07:19.921435] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:57.434 [2024-06-09 09:07:19.921444] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.921448] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.921451] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcec0) 00:28:57.434 [2024-06-09 09:07:19.921458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.434 [2024-06-09 09:07:19.921469] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa40210, cid 3, qid 0 00:28:57.434 [2024-06-09 09:07:19.921700] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.434 [2024-06-09 
09:07:19.921706] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.434 [2024-06-09 09:07:19.921710] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.921713] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa40210) on tqpair=0x9bcec0 00:28:57.434 [2024-06-09 09:07:19.921723] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.921727] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.921730] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcec0) 00:28:57.434 [2024-06-09 09:07:19.921740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.434 [2024-06-09 09:07:19.921751] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa40210, cid 3, qid 0 00:28:57.434 [2024-06-09 09:07:19.921976] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.434 [2024-06-09 09:07:19.921983] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.434 [2024-06-09 09:07:19.921986] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.921990] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa40210) on tqpair=0x9bcec0 00:28:57.434 [2024-06-09 09:07:19.921999] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.922003] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.922006] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcec0) 00:28:57.434 [2024-06-09 09:07:19.922013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.434 [2024-06-09 
09:07:19.922023] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa40210, cid 3, qid 0 00:28:57.434 [2024-06-09 09:07:19.922230] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.434 [2024-06-09 09:07:19.922236] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.434 [2024-06-09 09:07:19.922239] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.922243] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa40210) on tqpair=0x9bcec0 00:28:57.434 [2024-06-09 09:07:19.922252] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.922256] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.922259] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcec0) 00:28:57.434 [2024-06-09 09:07:19.922266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.434 [2024-06-09 09:07:19.922276] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa40210, cid 3, qid 0 00:28:57.434 [2024-06-09 09:07:19.926411] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.434 [2024-06-09 09:07:19.926420] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.434 [2024-06-09 09:07:19.926423] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.926427] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa40210) on tqpair=0x9bcec0 00:28:57.434 [2024-06-09 09:07:19.926437] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.926441] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.434 [2024-06-09 09:07:19.926444] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x9bcec0) 00:28:57.434 [2024-06-09 09:07:19.926451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.434 [2024-06-09 09:07:19.926463] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa40210, cid 3, qid 0 00:28:57.434 [2024-06-09 09:07:19.926691] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.434 [2024-06-09 09:07:19.926698] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.434 [2024-06-09 09:07:19.926701] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.435 [2024-06-09 09:07:19.926705] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa40210) on tqpair=0x9bcec0 00:28:57.435 [2024-06-09 09:07:19.926712] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:28:57.435 00:28:57.435 09:07:19 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:57.435 [2024-06-09 09:07:19.964729] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:28:57.435 [2024-06-09 09:07:19.964795] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2738119 ] 00:28:57.435 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.700 [2024-06-09 09:07:19.998953] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:57.700 [2024-06-09 09:07:19.998996] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:57.701 [2024-06-09 09:07:19.999001] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:57.701 [2024-06-09 09:07:19.999012] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:57.701 [2024-06-09 09:07:19.999019] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:57.701 [2024-06-09 09:07:20.002485] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:57.701 [2024-06-09 09:07:20.002510] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xcd9ec0 0 00:28:57.701 [2024-06-09 09:07:20.002842] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:57.701 [2024-06-09 09:07:20.002855] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:57.701 [2024-06-09 09:07:20.002861] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:57.701 [2024-06-09 09:07:20.002864] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:57.701 [2024-06-09 09:07:20.002897] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.002903] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.701 [2024-06-09 
09:07:20.002907] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd9ec0) 00:28:57.701 [2024-06-09 09:07:20.002919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:57.701 [2024-06-09 09:07:20.002935] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5cdf0, cid 0, qid 0 00:28:57.701 [2024-06-09 09:07:20.009413] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.701 [2024-06-09 09:07:20.009422] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.701 [2024-06-09 09:07:20.009426] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.009430] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5cdf0) on tqpair=0xcd9ec0 00:28:57.701 [2024-06-09 09:07:20.009442] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:57.701 [2024-06-09 09:07:20.009448] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:57.701 [2024-06-09 09:07:20.009454] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:57.701 [2024-06-09 09:07:20.009464] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.009468] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.009472] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd9ec0) 00:28:57.701 [2024-06-09 09:07:20.009480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.701 [2024-06-09 09:07:20.009493] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5cdf0, cid 0, qid 0 00:28:57.701 [2024-06-09 
09:07:20.009711] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.701 [2024-06-09 09:07:20.009718] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.701 [2024-06-09 09:07:20.009725] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.009729] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5cdf0) on tqpair=0xcd9ec0 00:28:57.701 [2024-06-09 09:07:20.009735] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:57.701 [2024-06-09 09:07:20.009743] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:57.701 [2024-06-09 09:07:20.009750] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.009754] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.009757] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd9ec0) 00:28:57.701 [2024-06-09 09:07:20.009765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.701 [2024-06-09 09:07:20.009776] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5cdf0, cid 0, qid 0 00:28:57.701 [2024-06-09 09:07:20.010121] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.701 [2024-06-09 09:07:20.010128] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.701 [2024-06-09 09:07:20.010131] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.010135] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5cdf0) on tqpair=0xcd9ec0 00:28:57.701 [2024-06-09 09:07:20.010140] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:57.701 [2024-06-09 09:07:20.010147] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:57.701 [2024-06-09 09:07:20.010154] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.010158] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.010161] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd9ec0) 00:28:57.701 [2024-06-09 09:07:20.010168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.701 [2024-06-09 09:07:20.010178] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5cdf0, cid 0, qid 0 00:28:57.701 [2024-06-09 09:07:20.010519] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.701 [2024-06-09 09:07:20.010525] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.701 [2024-06-09 09:07:20.010529] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.010532] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5cdf0) on tqpair=0xcd9ec0 00:28:57.701 [2024-06-09 09:07:20.010537] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:57.701 [2024-06-09 09:07:20.010546] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.010550] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.010554] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd9ec0) 00:28:57.701 [2024-06-09 09:07:20.010561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.701 [2024-06-09 09:07:20.010571] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5cdf0, cid 0, qid 0 00:28:57.701 [2024-06-09 09:07:20.010815] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.701 [2024-06-09 09:07:20.010822] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.701 [2024-06-09 09:07:20.010826] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.010829] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5cdf0) on tqpair=0xcd9ec0 00:28:57.701 [2024-06-09 09:07:20.010837] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:57.701 [2024-06-09 09:07:20.010842] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:57.701 [2024-06-09 09:07:20.010850] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:57.701 [2024-06-09 09:07:20.010955] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:57.701 [2024-06-09 09:07:20.010959] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:57.701 [2024-06-09 09:07:20.010967] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.010970] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.010974] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd9ec0) 00:28:57.701 [2024-06-09 09:07:20.010981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.701 [2024-06-09 09:07:20.010992] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5cdf0, cid 0, qid 0 00:28:57.701 [2024-06-09 09:07:20.011229] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.701 [2024-06-09 09:07:20.011236] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.701 [2024-06-09 09:07:20.011239] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.011243] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5cdf0) on tqpair=0xcd9ec0 00:28:57.701 [2024-06-09 09:07:20.011248] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:57.701 [2024-06-09 09:07:20.011257] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.011261] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.011265] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd9ec0) 00:28:57.701 [2024-06-09 09:07:20.011272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.701 [2024-06-09 09:07:20.011282] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5cdf0, cid 0, qid 0 00:28:57.701 [2024-06-09 09:07:20.011588] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.701 [2024-06-09 09:07:20.011594] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.701 [2024-06-09 09:07:20.011598] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.701 [2024-06-09 09:07:20.011601] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5cdf0) on tqpair=0xcd9ec0 00:28:57.701 [2024-06-09 
09:07:20.011606] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:57.701 [2024-06-09 09:07:20.011610] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:57.702 [2024-06-09 09:07:20.011618] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:57.702 [2024-06-09 09:07:20.011630] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:57.702 [2024-06-09 09:07:20.011639] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.011643] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd9ec0) 00:28:57.702 [2024-06-09 09:07:20.011650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.702 [2024-06-09 09:07:20.011660] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5cdf0, cid 0, qid 0 00:28:57.702 [2024-06-09 09:07:20.011922] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:57.702 [2024-06-09 09:07:20.011930] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:57.702 [2024-06-09 09:07:20.011934] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.011937] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd9ec0): datao=0, datal=4096, cccid=0 00:28:57.702 [2024-06-09 09:07:20.011942] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd5cdf0) on tqpair(0xcd9ec0): expected_datao=0, payload_size=4096 00:28:57.702 [2024-06-09 09:07:20.011947] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.012082] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.012086] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.052630] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.702 [2024-06-09 09:07:20.052645] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.702 [2024-06-09 09:07:20.052649] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.052653] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5cdf0) on tqpair=0xcd9ec0 00:28:57.702 [2024-06-09 09:07:20.052661] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:57.702 [2024-06-09 09:07:20.052666] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:57.702 [2024-06-09 09:07:20.052670] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:57.702 [2024-06-09 09:07:20.052674] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:57.702 [2024-06-09 09:07:20.052679] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:57.702 [2024-06-09 09:07:20.052684] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:57.702 [2024-06-09 09:07:20.052697] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:57.702 [2024-06-09 09:07:20.052706] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.052711] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.052714] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd9ec0) 00:28:57.702 [2024-06-09 09:07:20.052722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:57.702 [2024-06-09 09:07:20.052735] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5cdf0, cid 0, qid 0 00:28:57.702 [2024-06-09 09:07:20.052883] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.702 [2024-06-09 09:07:20.052890] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.702 [2024-06-09 09:07:20.052894] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.052898] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5cdf0) on tqpair=0xcd9ec0 00:28:57.702 [2024-06-09 09:07:20.052907] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.052911] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.052914] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd9ec0) 00:28:57.702 [2024-06-09 09:07:20.052921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.702 [2024-06-09 09:07:20.052927] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.052931] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.052934] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xcd9ec0) 00:28:57.702 [2024-06-09 09:07:20.052942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:28:57.702 [2024-06-09 09:07:20.052949] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.052952] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.052956] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xcd9ec0) 00:28:57.702 [2024-06-09 09:07:20.052961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.702 [2024-06-09 09:07:20.052967] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.052971] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.052975] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd9ec0) 00:28:57.702 [2024-06-09 09:07:20.052980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.702 [2024-06-09 09:07:20.052985] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:57.702 [2024-06-09 09:07:20.052994] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:57.702 [2024-06-09 09:07:20.053000] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.053004] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd9ec0) 00:28:57.702 [2024-06-09 09:07:20.053011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.702 [2024-06-09 09:07:20.053024] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xd5cdf0, cid 0, qid 0 00:28:57.702 [2024-06-09 09:07:20.053029] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5cf50, cid 1, qid 0 00:28:57.702 [2024-06-09 09:07:20.053034] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d0b0, cid 2, qid 0 00:28:57.702 [2024-06-09 09:07:20.053039] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d210, cid 3, qid 0 00:28:57.702 [2024-06-09 09:07:20.053043] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d370, cid 4, qid 0 00:28:57.702 [2024-06-09 09:07:20.053304] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.702 [2024-06-09 09:07:20.053311] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.702 [2024-06-09 09:07:20.053314] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.053318] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d370) on tqpair=0xcd9ec0 00:28:57.702 [2024-06-09 09:07:20.053323] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:57.702 [2024-06-09 09:07:20.053330] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:57.702 [2024-06-09 09:07:20.053338] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:57.702 [2024-06-09 09:07:20.053344] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:57.702 [2024-06-09 09:07:20.053351] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.053355] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.053358] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd9ec0) 00:28:57.702 [2024-06-09 09:07:20.053364] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:57.702 [2024-06-09 09:07:20.053378] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d370, cid 4, qid 0 00:28:57.702 [2024-06-09 09:07:20.057410] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.702 [2024-06-09 09:07:20.057418] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.702 [2024-06-09 09:07:20.057421] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.057425] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d370) on tqpair=0xcd9ec0 00:28:57.702 [2024-06-09 09:07:20.057479] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:57.702 [2024-06-09 09:07:20.057489] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:57.702 [2024-06-09 09:07:20.057496] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.057500] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd9ec0) 00:28:57.702 [2024-06-09 09:07:20.057506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.702 [2024-06-09 09:07:20.057518] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d370, cid 4, qid 0 00:28:57.702 [2024-06-09 09:07:20.057729] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:57.702 [2024-06-09 09:07:20.057736] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:57.702 [2024-06-09 09:07:20.057740] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:57.702 [2024-06-09 09:07:20.057744] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd9ec0): datao=0, datal=4096, cccid=4 00:28:57.703 [2024-06-09 09:07:20.057748] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd5d370) on tqpair(0xcd9ec0): expected_datao=0, payload_size=4096 00:28:57.703 [2024-06-09 09:07:20.057753] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.057824] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.057829] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.058154] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.703 [2024-06-09 09:07:20.058160] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.703 [2024-06-09 09:07:20.058164] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.058167] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d370) on tqpair=0xcd9ec0 00:28:57.703 [2024-06-09 09:07:20.058180] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:57.703 [2024-06-09 09:07:20.058188] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:57.703 [2024-06-09 09:07:20.058198] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:57.703 [2024-06-09 09:07:20.058205] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.058208] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0xcd9ec0) 00:28:57.703 [2024-06-09 09:07:20.058215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.703 [2024-06-09 09:07:20.058226] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d370, cid 4, qid 0 00:28:57.703 [2024-06-09 09:07:20.058504] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:57.703 [2024-06-09 09:07:20.058513] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:57.703 [2024-06-09 09:07:20.058517] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.058520] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd9ec0): datao=0, datal=4096, cccid=4 00:28:57.703 [2024-06-09 09:07:20.058525] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd5d370) on tqpair(0xcd9ec0): expected_datao=0, payload_size=4096 00:28:57.703 [2024-06-09 09:07:20.058534] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.058541] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.058544] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.058707] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.703 [2024-06-09 09:07:20.058714] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.703 [2024-06-09 09:07:20.058717] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.058721] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d370) on tqpair=0xcd9ec0 00:28:57.703 [2024-06-09 09:07:20.058731] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:57.703 [2024-06-09 
09:07:20.058740] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:57.703 [2024-06-09 09:07:20.058748] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.058752] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd9ec0) 00:28:57.703 [2024-06-09 09:07:20.058758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.703 [2024-06-09 09:07:20.058771] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d370, cid 4, qid 0 00:28:57.703 [2024-06-09 09:07:20.059006] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:57.703 [2024-06-09 09:07:20.059013] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:57.703 [2024-06-09 09:07:20.059017] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.059020] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd9ec0): datao=0, datal=4096, cccid=4 00:28:57.703 [2024-06-09 09:07:20.059024] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd5d370) on tqpair(0xcd9ec0): expected_datao=0, payload_size=4096 00:28:57.703 [2024-06-09 09:07:20.059029] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.059035] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.059039] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.059235] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.703 [2024-06-09 09:07:20.059242] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.703 [2024-06-09 09:07:20.059245] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.059249] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d370) on tqpair=0xcd9ec0 00:28:57.703 [2024-06-09 09:07:20.059260] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:57.703 [2024-06-09 09:07:20.059268] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:57.703 [2024-06-09 09:07:20.059275] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:57.703 [2024-06-09 09:07:20.059281] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:57.703 [2024-06-09 09:07:20.059286] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:57.703 [2024-06-09 09:07:20.059291] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:57.703 [2024-06-09 09:07:20.059296] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:57.703 [2024-06-09 09:07:20.059303] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:57.703 [2024-06-09 09:07:20.059319] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.059324] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd9ec0) 00:28:57.703 [2024-06-09 09:07:20.059330] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 
cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.703 [2024-06-09 09:07:20.059337] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.059341] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.059345] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcd9ec0) 00:28:57.703 [2024-06-09 09:07:20.059351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.703 [2024-06-09 09:07:20.059364] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d370, cid 4, qid 0 00:28:57.703 [2024-06-09 09:07:20.059370] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d4d0, cid 5, qid 0 00:28:57.703 [2024-06-09 09:07:20.059587] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.703 [2024-06-09 09:07:20.059594] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.703 [2024-06-09 09:07:20.059598] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.059602] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d370) on tqpair=0xcd9ec0 00:28:57.703 [2024-06-09 09:07:20.059608] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.703 [2024-06-09 09:07:20.059614] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.703 [2024-06-09 09:07:20.059618] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.059621] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d4d0) on tqpair=0xcd9ec0 00:28:57.703 [2024-06-09 09:07:20.059630] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.059634] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on 
tqpair(0xcd9ec0) 00:28:57.703 [2024-06-09 09:07:20.059641] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.703 [2024-06-09 09:07:20.059652] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d4d0, cid 5, qid 0 00:28:57.703 [2024-06-09 09:07:20.059889] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.703 [2024-06-09 09:07:20.059895] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.703 [2024-06-09 09:07:20.059899] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.059902] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d4d0) on tqpair=0xcd9ec0 00:28:57.703 [2024-06-09 09:07:20.059911] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.059915] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcd9ec0) 00:28:57.703 [2024-06-09 09:07:20.059922] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.703 [2024-06-09 09:07:20.059931] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d4d0, cid 5, qid 0 00:28:57.703 [2024-06-09 09:07:20.060248] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.703 [2024-06-09 09:07:20.060254] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.703 [2024-06-09 09:07:20.060257] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.703 [2024-06-09 09:07:20.060261] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d4d0) on tqpair=0xcd9ec0 00:28:57.703 [2024-06-09 09:07:20.060270] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.060274] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcd9ec0) 00:28:57.704 [2024-06-09 09:07:20.060283] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.704 [2024-06-09 09:07:20.060293] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d4d0, cid 5, qid 0 00:28:57.704 [2024-06-09 09:07:20.060527] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.704 [2024-06-09 09:07:20.060534] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.704 [2024-06-09 09:07:20.060538] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.060542] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d4d0) on tqpair=0xcd9ec0 00:28:57.704 [2024-06-09 09:07:20.060553] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.060557] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcd9ec0) 00:28:57.704 [2024-06-09 09:07:20.060564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.704 [2024-06-09 09:07:20.060571] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.060575] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd9ec0) 00:28:57.704 [2024-06-09 09:07:20.060581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.704 [2024-06-09 09:07:20.060588] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.060591] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=6 on tqpair(0xcd9ec0) 00:28:57.704 [2024-06-09 09:07:20.060597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.704 [2024-06-09 09:07:20.060608] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.060612] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xcd9ec0) 00:28:57.704 [2024-06-09 09:07:20.060618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.704 [2024-06-09 09:07:20.060630] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d4d0, cid 5, qid 0 00:28:57.704 [2024-06-09 09:07:20.060635] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d370, cid 4, qid 0 00:28:57.704 [2024-06-09 09:07:20.060640] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d630, cid 6, qid 0 00:28:57.704 [2024-06-09 09:07:20.060644] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d790, cid 7, qid 0 00:28:57.704 [2024-06-09 09:07:20.061061] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:57.704 [2024-06-09 09:07:20.061068] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:57.704 [2024-06-09 09:07:20.061071] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.061075] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd9ec0): datao=0, datal=8192, cccid=5 00:28:57.704 [2024-06-09 09:07:20.061079] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd5d4d0) on tqpair(0xcd9ec0): expected_datao=0, payload_size=8192 00:28:57.704 [2024-06-09 09:07:20.061083] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:28:57.704 [2024-06-09 09:07:20.061386] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.061390] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.061396] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:57.704 [2024-06-09 09:07:20.065408] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:57.704 [2024-06-09 09:07:20.065412] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.065416] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd9ec0): datao=0, datal=512, cccid=4 00:28:57.704 [2024-06-09 09:07:20.065424] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd5d370) on tqpair(0xcd9ec0): expected_datao=0, payload_size=512 00:28:57.704 [2024-06-09 09:07:20.065428] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.065434] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.065438] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.065443] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:57.704 [2024-06-09 09:07:20.065449] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:57.704 [2024-06-09 09:07:20.065452] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.065456] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd9ec0): datao=0, datal=512, cccid=6 00:28:57.704 [2024-06-09 09:07:20.065460] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd5d630) on tqpair(0xcd9ec0): expected_datao=0, payload_size=512 00:28:57.704 [2024-06-09 09:07:20.065464] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.065470] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.065474] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.065479] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:57.704 [2024-06-09 09:07:20.065485] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:57.704 [2024-06-09 09:07:20.065488] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.065492] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd9ec0): datao=0, datal=4096, cccid=7 00:28:57.704 [2024-06-09 09:07:20.065496] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd5d790) on tqpair(0xcd9ec0): expected_datao=0, payload_size=4096 00:28:57.704 [2024-06-09 09:07:20.065500] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.065507] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.065510] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.065517] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.704 [2024-06-09 09:07:20.065523] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.704 [2024-06-09 09:07:20.065527] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.065530] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d4d0) on tqpair=0xcd9ec0 00:28:57.704 [2024-06-09 09:07:20.065543] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.704 [2024-06-09 09:07:20.065549] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.704 [2024-06-09 09:07:20.065552] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.065556] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d370) on tqpair=0xcd9ec0 00:28:57.704 [2024-06-09 09:07:20.065564] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.704 [2024-06-09 09:07:20.065570] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.704 [2024-06-09 09:07:20.065574] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.065577] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d630) on tqpair=0xcd9ec0 00:28:57.704 [2024-06-09 09:07:20.065586] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.704 [2024-06-09 09:07:20.065591] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.704 [2024-06-09 09:07:20.065595] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.704 [2024-06-09 09:07:20.065598] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d790) on tqpair=0xcd9ec0 00:28:57.704 ===================================================== 00:28:57.704 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:57.704 ===================================================== 00:28:57.704 Controller Capabilities/Features 00:28:57.704 ================================ 00:28:57.704 Vendor ID: 8086 00:28:57.704 Subsystem Vendor ID: 8086 00:28:57.704 Serial Number: SPDK00000000000001 00:28:57.704 Model Number: SPDK bdev Controller 00:28:57.704 Firmware Version: 24.09 00:28:57.704 Recommended Arb Burst: 6 00:28:57.704 IEEE OUI Identifier: e4 d2 5c 00:28:57.704 Multi-path I/O 00:28:57.704 May have multiple subsystem ports: Yes 00:28:57.704 May have multiple controllers: Yes 00:28:57.704 Associated with SR-IOV VF: No 00:28:57.704 Max Data Transfer Size: 131072 00:28:57.704 Max Number of Namespaces: 32 00:28:57.704 Max Number of I/O Queues: 127 00:28:57.704 NVMe Specification Version (VS): 1.3 00:28:57.704 NVMe Specification Version 
(Identify): 1.3 00:28:57.704 Maximum Queue Entries: 128 00:28:57.704 Contiguous Queues Required: Yes 00:28:57.704 Arbitration Mechanisms Supported 00:28:57.704 Weighted Round Robin: Not Supported 00:28:57.704 Vendor Specific: Not Supported 00:28:57.704 Reset Timeout: 15000 ms 00:28:57.704 Doorbell Stride: 4 bytes 00:28:57.705 NVM Subsystem Reset: Not Supported 00:28:57.705 Command Sets Supported 00:28:57.705 NVM Command Set: Supported 00:28:57.705 Boot Partition: Not Supported 00:28:57.705 Memory Page Size Minimum: 4096 bytes 00:28:57.705 Memory Page Size Maximum: 4096 bytes 00:28:57.705 Persistent Memory Region: Not Supported 00:28:57.705 Optional Asynchronous Events Supported 00:28:57.705 Namespace Attribute Notices: Supported 00:28:57.705 Firmware Activation Notices: Not Supported 00:28:57.705 ANA Change Notices: Not Supported 00:28:57.705 PLE Aggregate Log Change Notices: Not Supported 00:28:57.705 LBA Status Info Alert Notices: Not Supported 00:28:57.705 EGE Aggregate Log Change Notices: Not Supported 00:28:57.705 Normal NVM Subsystem Shutdown event: Not Supported 00:28:57.705 Zone Descriptor Change Notices: Not Supported 00:28:57.705 Discovery Log Change Notices: Not Supported 00:28:57.705 Controller Attributes 00:28:57.705 128-bit Host Identifier: Supported 00:28:57.705 Non-Operational Permissive Mode: Not Supported 00:28:57.705 NVM Sets: Not Supported 00:28:57.705 Read Recovery Levels: Not Supported 00:28:57.705 Endurance Groups: Not Supported 00:28:57.705 Predictable Latency Mode: Not Supported 00:28:57.705 Traffic Based Keep ALive: Not Supported 00:28:57.705 Namespace Granularity: Not Supported 00:28:57.705 SQ Associations: Not Supported 00:28:57.705 UUID List: Not Supported 00:28:57.705 Multi-Domain Subsystem: Not Supported 00:28:57.705 Fixed Capacity Management: Not Supported 00:28:57.705 Variable Capacity Management: Not Supported 00:28:57.705 Delete Endurance Group: Not Supported 00:28:57.705 Delete NVM Set: Not Supported 00:28:57.705 Extended LBA 
Formats Supported: Not Supported 00:28:57.705 Flexible Data Placement Supported: Not Supported 00:28:57.705 00:28:57.705 Controller Memory Buffer Support 00:28:57.705 ================================ 00:28:57.705 Supported: No 00:28:57.705 00:28:57.705 Persistent Memory Region Support 00:28:57.705 ================================ 00:28:57.705 Supported: No 00:28:57.705 00:28:57.705 Admin Command Set Attributes 00:28:57.705 ============================ 00:28:57.705 Security Send/Receive: Not Supported 00:28:57.705 Format NVM: Not Supported 00:28:57.705 Firmware Activate/Download: Not Supported 00:28:57.705 Namespace Management: Not Supported 00:28:57.705 Device Self-Test: Not Supported 00:28:57.705 Directives: Not Supported 00:28:57.705 NVMe-MI: Not Supported 00:28:57.705 Virtualization Management: Not Supported 00:28:57.705 Doorbell Buffer Config: Not Supported 00:28:57.705 Get LBA Status Capability: Not Supported 00:28:57.705 Command & Feature Lockdown Capability: Not Supported 00:28:57.705 Abort Command Limit: 4 00:28:57.705 Async Event Request Limit: 4 00:28:57.705 Number of Firmware Slots: N/A 00:28:57.705 Firmware Slot 1 Read-Only: N/A 00:28:57.705 Firmware Activation Without Reset: N/A 00:28:57.705 Multiple Update Detection Support: N/A 00:28:57.705 Firmware Update Granularity: No Information Provided 00:28:57.705 Per-Namespace SMART Log: No 00:28:57.705 Asymmetric Namespace Access Log Page: Not Supported 00:28:57.705 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:57.705 Command Effects Log Page: Supported 00:28:57.705 Get Log Page Extended Data: Supported 00:28:57.705 Telemetry Log Pages: Not Supported 00:28:57.705 Persistent Event Log Pages: Not Supported 00:28:57.705 Supported Log Pages Log Page: May Support 00:28:57.705 Commands Supported & Effects Log Page: Not Supported 00:28:57.705 Feature Identifiers & Effects Log Page:May Support 00:28:57.705 NVMe-MI Commands & Effects Log Page: May Support 00:28:57.705 Data Area 4 for Telemetry Log: Not Supported 
00:28:57.705 Error Log Page Entries Supported: 128 00:28:57.705 Keep Alive: Supported 00:28:57.705 Keep Alive Granularity: 10000 ms 00:28:57.705 00:28:57.705 NVM Command Set Attributes 00:28:57.705 ========================== 00:28:57.705 Submission Queue Entry Size 00:28:57.705 Max: 64 00:28:57.705 Min: 64 00:28:57.705 Completion Queue Entry Size 00:28:57.705 Max: 16 00:28:57.705 Min: 16 00:28:57.705 Number of Namespaces: 32 00:28:57.705 Compare Command: Supported 00:28:57.705 Write Uncorrectable Command: Not Supported 00:28:57.705 Dataset Management Command: Supported 00:28:57.705 Write Zeroes Command: Supported 00:28:57.705 Set Features Save Field: Not Supported 00:28:57.705 Reservations: Supported 00:28:57.705 Timestamp: Not Supported 00:28:57.705 Copy: Supported 00:28:57.705 Volatile Write Cache: Present 00:28:57.705 Atomic Write Unit (Normal): 1 00:28:57.705 Atomic Write Unit (PFail): 1 00:28:57.705 Atomic Compare & Write Unit: 1 00:28:57.705 Fused Compare & Write: Supported 00:28:57.705 Scatter-Gather List 00:28:57.705 SGL Command Set: Supported 00:28:57.705 SGL Keyed: Supported 00:28:57.705 SGL Bit Bucket Descriptor: Not Supported 00:28:57.705 SGL Metadata Pointer: Not Supported 00:28:57.705 Oversized SGL: Not Supported 00:28:57.705 SGL Metadata Address: Not Supported 00:28:57.705 SGL Offset: Supported 00:28:57.705 Transport SGL Data Block: Not Supported 00:28:57.705 Replay Protected Memory Block: Not Supported 00:28:57.705 00:28:57.705 Firmware Slot Information 00:28:57.705 ========================= 00:28:57.705 Active slot: 1 00:28:57.705 Slot 1 Firmware Revision: 24.09 00:28:57.705 00:28:57.705 00:28:57.705 Commands Supported and Effects 00:28:57.705 ============================== 00:28:57.705 Admin Commands 00:28:57.705 -------------- 00:28:57.705 Get Log Page (02h): Supported 00:28:57.705 Identify (06h): Supported 00:28:57.705 Abort (08h): Supported 00:28:57.705 Set Features (09h): Supported 00:28:57.705 Get Features (0Ah): Supported 00:28:57.705 
Asynchronous Event Request (0Ch): Supported 00:28:57.705 Keep Alive (18h): Supported 00:28:57.705 I/O Commands 00:28:57.705 ------------ 00:28:57.705 Flush (00h): Supported LBA-Change 00:28:57.705 Write (01h): Supported LBA-Change 00:28:57.705 Read (02h): Supported 00:28:57.705 Compare (05h): Supported 00:28:57.705 Write Zeroes (08h): Supported LBA-Change 00:28:57.705 Dataset Management (09h): Supported LBA-Change 00:28:57.705 Copy (19h): Supported LBA-Change 00:28:57.705 Unknown (79h): Supported LBA-Change 00:28:57.705 Unknown (7Ah): Supported 00:28:57.705 00:28:57.705 Error Log 00:28:57.705 ========= 00:28:57.705 00:28:57.705 Arbitration 00:28:57.705 =========== 00:28:57.705 Arbitration Burst: 1 00:28:57.705 00:28:57.705 Power Management 00:28:57.705 ================ 00:28:57.705 Number of Power States: 1 00:28:57.705 Current Power State: Power State #0 00:28:57.705 Power State #0: 00:28:57.705 Max Power: 0.00 W 00:28:57.705 Non-Operational State: Operational 00:28:57.705 Entry Latency: Not Reported 00:28:57.705 Exit Latency: Not Reported 00:28:57.705 Relative Read Throughput: 0 00:28:57.705 Relative Read Latency: 0 00:28:57.705 Relative Write Throughput: 0 00:28:57.705 Relative Write Latency: 0 00:28:57.705 Idle Power: Not Reported 00:28:57.705 Active Power: Not Reported 00:28:57.706 Non-Operational Permissive Mode: Not Supported 00:28:57.706 00:28:57.706 Health Information 00:28:57.706 ================== 00:28:57.706 Critical Warnings: 00:28:57.706 Available Spare Space: OK 00:28:57.706 Temperature: OK 00:28:57.706 Device Reliability: OK 00:28:57.706 Read Only: No 00:28:57.706 Volatile Memory Backup: OK 00:28:57.706 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:57.706 Temperature Threshold: [2024-06-09 09:07:20.065700] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.065706] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xcd9ec0) 00:28:57.706 [2024-06-09 
09:07:20.065714] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.706 [2024-06-09 09:07:20.065726] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d790, cid 7, qid 0 00:28:57.706 [2024-06-09 09:07:20.065963] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.706 [2024-06-09 09:07:20.065970] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.706 [2024-06-09 09:07:20.065974] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.065977] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d790) on tqpair=0xcd9ec0 00:28:57.706 [2024-06-09 09:07:20.066005] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:57.706 [2024-06-09 09:07:20.066018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.706 [2024-06-09 09:07:20.066024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.706 [2024-06-09 09:07:20.066030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.706 [2024-06-09 09:07:20.066036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.706 [2024-06-09 09:07:20.066044] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.066048] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.066051] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd9ec0) 00:28:57.706 [2024-06-09 09:07:20.066058] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.706 [2024-06-09 09:07:20.066071] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d210, cid 3, qid 0 00:28:57.706 [2024-06-09 09:07:20.066273] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.706 [2024-06-09 09:07:20.066280] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.706 [2024-06-09 09:07:20.066284] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.066287] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d210) on tqpair=0xcd9ec0 00:28:57.706 [2024-06-09 09:07:20.066294] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.066298] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.066301] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd9ec0) 00:28:57.706 [2024-06-09 09:07:20.066308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.706 [2024-06-09 09:07:20.066321] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d210, cid 3, qid 0 00:28:57.706 [2024-06-09 09:07:20.066550] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.706 [2024-06-09 09:07:20.066557] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.706 [2024-06-09 09:07:20.066561] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.066564] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d210) on tqpair=0xcd9ec0 00:28:57.706 [2024-06-09 09:07:20.066570] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:57.706 [2024-06-09 
09:07:20.066574] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:57.706 [2024-06-09 09:07:20.066583] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.066587] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.066590] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd9ec0) 00:28:57.706 [2024-06-09 09:07:20.066600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.706 [2024-06-09 09:07:20.066611] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d210, cid 3, qid 0 00:28:57.706 [2024-06-09 09:07:20.066820] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.706 [2024-06-09 09:07:20.066827] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.706 [2024-06-09 09:07:20.066830] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.066834] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d210) on tqpair=0xcd9ec0 00:28:57.706 [2024-06-09 09:07:20.066844] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.066848] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.066851] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd9ec0) 00:28:57.706 [2024-06-09 09:07:20.066858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.706 [2024-06-09 09:07:20.066868] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d210, cid 3, qid 0 00:28:57.706 [2024-06-09 09:07:20.067108] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:28:57.706 [2024-06-09 09:07:20.067115] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.706 [2024-06-09 09:07:20.067119] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.067122] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d210) on tqpair=0xcd9ec0 00:28:57.706 [2024-06-09 09:07:20.067132] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.067136] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.067139] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd9ec0) 00:28:57.706 [2024-06-09 09:07:20.067146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.706 [2024-06-09 09:07:20.067156] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d210, cid 3, qid 0 00:28:57.706 [2024-06-09 09:07:20.067508] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.706 [2024-06-09 09:07:20.067515] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.706 [2024-06-09 09:07:20.067518] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.067522] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d210) on tqpair=0xcd9ec0 00:28:57.706 [2024-06-09 09:07:20.067531] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.067535] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.706 [2024-06-09 09:07:20.067539] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd9ec0) 00:28:57.706 [2024-06-09 09:07:20.067546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:57.706 [2024-06-09 09:07:20.067555] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d210, cid 3, qid 0 00:28:57.706 [2024-06-09 09:07:20.067787] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.706 [2024-06-09 09:07:20.067794] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.706 [2024-06-09 09:07:20.067798] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.067801] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d210) on tqpair=0xcd9ec0 00:28:57.707 [2024-06-09 09:07:20.067811] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.067814] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.067818] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd9ec0) 00:28:57.707 [2024-06-09 09:07:20.067825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.707 [2024-06-09 09:07:20.067837] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d210, cid 3, qid 0 00:28:57.707 [2024-06-09 09:07:20.068066] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.707 [2024-06-09 09:07:20.068073] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.707 [2024-06-09 09:07:20.068076] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.068080] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d210) on tqpair=0xcd9ec0 00:28:57.707 [2024-06-09 09:07:20.068089] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.068093] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.068097] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd9ec0) 00:28:57.707 [2024-06-09 09:07:20.068104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.707 [2024-06-09 09:07:20.068113] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d210, cid 3, qid 0 00:28:57.707 [2024-06-09 09:07:20.068356] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.707 [2024-06-09 09:07:20.068363] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.707 [2024-06-09 09:07:20.068366] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.068370] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d210) on tqpair=0xcd9ec0 00:28:57.707 [2024-06-09 09:07:20.068380] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.068384] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.068387] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd9ec0) 00:28:57.707 [2024-06-09 09:07:20.068394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.707 [2024-06-09 09:07:20.068410] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d210, cid 3, qid 0 00:28:57.707 [2024-06-09 09:07:20.068625] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.707 [2024-06-09 09:07:20.068632] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.707 [2024-06-09 09:07:20.068635] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.068639] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d210) on tqpair=0xcd9ec0 00:28:57.707 [2024-06-09 
09:07:20.068648] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.068652] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.068656] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd9ec0) 00:28:57.707 [2024-06-09 09:07:20.068662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.707 [2024-06-09 09:07:20.068672] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d210, cid 3, qid 0 00:28:57.707 [2024-06-09 09:07:20.068909] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.707 [2024-06-09 09:07:20.068916] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.707 [2024-06-09 09:07:20.068919] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.068923] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d210) on tqpair=0xcd9ec0 00:28:57.707 [2024-06-09 09:07:20.068932] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.068936] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.068940] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd9ec0) 00:28:57.707 [2024-06-09 09:07:20.068947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.707 [2024-06-09 09:07:20.068957] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d210, cid 3, qid 0 00:28:57.707 [2024-06-09 09:07:20.069293] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.707 [2024-06-09 09:07:20.069300] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.707 [2024-06-09 
09:07:20.069303] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.069307] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d210) on tqpair=0xcd9ec0 00:28:57.707 [2024-06-09 09:07:20.069316] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.069320] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.069324] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd9ec0) 00:28:57.707 [2024-06-09 09:07:20.069330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.707 [2024-06-09 09:07:20.069340] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5d210, cid 3, qid 0 00:28:57.707 [2024-06-09 09:07:20.073411] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:57.707 [2024-06-09 09:07:20.073419] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:57.707 [2024-06-09 09:07:20.073422] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:57.707 [2024-06-09 09:07:20.073426] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd5d210) on tqpair=0xcd9ec0 00:28:57.707 [2024-06-09 09:07:20.073434] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:28:57.707 0 Kelvin (-273 Celsius) 00:28:57.707 Available Spare: 0% 00:28:57.707 Available Spare Threshold: 0% 00:28:57.707 Life Percentage Used: 0% 00:28:57.707 Data Units Read: 0 00:28:57.707 Data Units Written: 0 00:28:57.707 Host Read Commands: 0 00:28:57.707 Host Write Commands: 0 00:28:57.707 Controller Busy Time: 0 minutes 00:28:57.707 Power Cycles: 0 00:28:57.707 Power On Hours: 0 hours 00:28:57.707 Unsafe Shutdowns: 0 00:28:57.707 Unrecoverable Media Errors: 0 00:28:57.707 
Lifetime Error Log Entries: 0 00:28:57.707 Warning Temperature Time: 0 minutes 00:28:57.707 Critical Temperature Time: 0 minutes 00:28:57.707 00:28:57.707 Number of Queues 00:28:57.707 ================ 00:28:57.707 Number of I/O Submission Queues: 127 00:28:57.707 Number of I/O Completion Queues: 127 00:28:57.707 00:28:57.707 Active Namespaces 00:28:57.707 ================= 00:28:57.707 Namespace ID:1 00:28:57.707 Error Recovery Timeout: Unlimited 00:28:57.707 Command Set Identifier: NVM (00h) 00:28:57.707 Deallocate: Supported 00:28:57.707 Deallocated/Unwritten Error: Not Supported 00:28:57.707 Deallocated Read Value: Unknown 00:28:57.707 Deallocate in Write Zeroes: Not Supported 00:28:57.707 Deallocated Guard Field: 0xFFFF 00:28:57.707 Flush: Supported 00:28:57.707 Reservation: Supported 00:28:57.707 Namespace Sharing Capabilities: Multiple Controllers 00:28:57.707 Size (in LBAs): 131072 (0GiB) 00:28:57.707 Capacity (in LBAs): 131072 (0GiB) 00:28:57.707 Utilization (in LBAs): 131072 (0GiB) 00:28:57.707 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:57.707 EUI64: ABCDEF0123456789 00:28:57.707 UUID: 556b9a5c-d6d5-429f-ae88-0d3017f4f805 00:28:57.707 Thin Provisioning: Not Supported 00:28:57.707 Per-NS Atomic Units: Yes 00:28:57.707 Atomic Boundary Size (Normal): 0 00:28:57.707 Atomic Boundary Size (PFail): 0 00:28:57.707 Atomic Boundary Offset: 0 00:28:57.707 Maximum Single Source Range Length: 65535 00:28:57.707 Maximum Copy Length: 65535 00:28:57.707 Maximum Source Range Count: 1 00:28:57.707 NGUID/EUI64 Never Reused: No 00:28:57.707 Namespace Write Protected: No 00:28:57.707 Number of LBA Formats: 1 00:28:57.707 Current LBA Format: LBA Format #00 00:28:57.707 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:57.707 00:28:57.707 09:07:20 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:57.707 09:07:20 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:57.707 09:07:20 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.707 09:07:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:57.707 09:07:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:57.708 rmmod nvme_tcp 00:28:57.708 rmmod nvme_fabrics 00:28:57.708 rmmod nvme_keyring 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2737770 ']' 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2737770 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 2737770 ']' 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 2737770 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o 
comm= 2737770 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2737770' 00:28:57.708 killing process with pid 2737770 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@968 -- # kill 2737770 00:28:57.708 09:07:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@973 -- # wait 2737770 00:28:57.970 09:07:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:57.970 09:07:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:57.970 09:07:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:57.970 09:07:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:57.970 09:07:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:57.970 09:07:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.970 09:07:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:57.970 09:07:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.886 09:07:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:59.886 00:28:59.886 real 0m11.020s 00:28:59.886 user 0m7.843s 00:28:59.886 sys 0m5.783s 00:28:59.886 09:07:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:59.886 09:07:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:59.886 ************************************ 00:28:59.886 END TEST nvmf_identify 00:28:59.886 ************************************ 00:29:00.148 09:07:22 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:00.148 09:07:22 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:00.148 09:07:22 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:00.148 09:07:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:00.148 ************************************ 00:29:00.148 START TEST nvmf_perf 00:29:00.148 ************************************ 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:00.148 * Looking for test storage... 00:29:00.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.148 09:07:22 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:00.149 
09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@285 -- # xtrace_disable 00:29:00.149 09:07:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:08.336 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:08.336 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:08.337 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:08.337 09:07:29 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:08.337 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:08.337 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:08.337 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:08.337 09:07:29 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:08.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:08.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:29:08.337 00:29:08.337 --- 10.0.0.2 ping statistics --- 00:29:08.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.337 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:08.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:08.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.421 ms 00:29:08.337 00:29:08.337 --- 10.0.0.1 ping statistics --- 00:29:08.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.337 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2742139 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2742139 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 2742139 ']' 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:08.337 09:07:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:08.337 [2024-06-09 09:07:29.865964] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:29:08.337 [2024-06-09 09:07:29.866026] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.337 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.337 [2024-06-09 09:07:29.936184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:08.337 [2024-06-09 09:07:30.012746] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.337 [2024-06-09 09:07:30.012787] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.337 [2024-06-09 09:07:30.012795] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:08.337 [2024-06-09 09:07:30.012801] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:08.337 [2024-06-09 09:07:30.012807] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:08.337 [2024-06-09 09:07:30.012953] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.337 [2024-06-09 09:07:30.013080] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:08.337 [2024-06-09 09:07:30.013238] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.337 [2024-06-09 09:07:30.013240] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:08.337 09:07:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:08.337 09:07:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:29:08.337 09:07:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:08.337 09:07:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:08.337 09:07:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:08.337 09:07:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.337 09:07:30 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:08.337 09:07:30 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:08.909 09:07:31 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:08.909 09:07:31 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:08.909 09:07:31 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:29:08.909 09:07:31 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:09.169 09:07:31 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:09.169 09:07:31 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:65:00.0 ']' 00:29:09.169 09:07:31 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:09.169 09:07:31 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:09.169 09:07:31 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:09.169 [2024-06-09 09:07:31.657420] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:09.169 09:07:31 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:09.430 09:07:31 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:09.430 09:07:31 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:09.690 09:07:32 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:09.690 09:07:32 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:09.690 09:07:32 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:09.950 [2024-06-09 09:07:32.335960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:09.950 09:07:32 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:10.211 09:07:32 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:29:10.211 09:07:32 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 
00:29:10.211 09:07:32 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:10.211 09:07:32 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:29:11.593 Initializing NVMe Controllers 00:29:11.593 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:29:11.593 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:29:11.593 Initialization complete. Launching workers. 00:29:11.593 ======================================================== 00:29:11.593 Latency(us) 00:29:11.593 Device Information : IOPS MiB/s Average min max 00:29:11.593 PCIE (0000:65:00.0) NSID 1 from core 0: 79843.83 311.89 401.34 13.27 5076.92 00:29:11.593 ======================================================== 00:29:11.593 Total : 79843.83 311.89 401.34 13.27 5076.92 00:29:11.593 00:29:11.593 09:07:33 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:11.593 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.534 Initializing NVMe Controllers 00:29:12.534 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:12.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:12.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:12.534 Initialization complete. Launching workers. 
00:29:12.534 ======================================================== 00:29:12.534 Latency(us) 00:29:12.534 Device Information : IOPS MiB/s Average min max 00:29:12.534 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 116.81 0.46 8765.39 171.35 46154.40 00:29:12.534 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 50.92 0.20 19796.12 7961.99 51878.00 00:29:12.534 ======================================================== 00:29:12.534 Total : 167.72 0.66 12114.00 171.35 51878.00 00:29:12.534 00:29:12.534 09:07:34 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:12.534 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.914 Initializing NVMe Controllers 00:29:13.914 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:13.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:13.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:13.914 Initialization complete. Launching workers. 
00:29:13.914 ======================================================== 00:29:13.914 Latency(us) 00:29:13.914 Device Information : IOPS MiB/s Average min max 00:29:13.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7575.00 29.59 4237.66 748.45 12324.70 00:29:13.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3771.00 14.73 8726.82 4935.30 55857.83 00:29:13.914 ======================================================== 00:29:13.914 Total : 11346.00 44.32 5729.69 748.45 55857.83 00:29:13.914 00:29:13.914 09:07:36 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:13.914 09:07:36 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:13.914 09:07:36 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:13.914 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.456 Initializing NVMe Controllers 00:29:16.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:16.456 Controller IO queue size 128, less than required. 00:29:16.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:16.456 Controller IO queue size 128, less than required. 00:29:16.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:16.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:16.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:16.456 Initialization complete. Launching workers. 
00:29:16.456 ======================================================== 00:29:16.456 Latency(us) 00:29:16.456 Device Information : IOPS MiB/s Average min max 00:29:16.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 943.84 235.96 139557.54 94362.26 230263.02 00:29:16.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 569.30 142.33 234309.24 78302.67 327835.49 00:29:16.456 ======================================================== 00:29:16.456 Total : 1513.14 378.29 175206.69 78302.67 327835.49 00:29:16.456 00:29:16.456 09:07:38 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:16.456 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.716 No valid NVMe controllers or AIO or URING devices found 00:29:16.716 Initializing NVMe Controllers 00:29:16.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:16.716 Controller IO queue size 128, less than required. 00:29:16.716 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:16.716 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:16.716 Controller IO queue size 128, less than required. 00:29:16.716 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:16.716 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:29:16.716 WARNING: Some requested NVMe devices were skipped 00:29:16.716 09:07:39 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:16.716 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.260 Initializing NVMe Controllers 00:29:19.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:19.261 Controller IO queue size 128, less than required. 00:29:19.261 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:19.261 Controller IO queue size 128, less than required. 00:29:19.261 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:19.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:19.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:19.261 Initialization complete. Launching workers. 
00:29:19.261 00:29:19.261 ==================== 00:29:19.261 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:19.261 TCP transport: 00:29:19.261 polls: 42766 00:29:19.261 idle_polls: 16761 00:29:19.261 sock_completions: 26005 00:29:19.261 nvme_completions: 3421 00:29:19.261 submitted_requests: 5108 00:29:19.261 queued_requests: 1 00:29:19.261 00:29:19.261 ==================== 00:29:19.261 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:19.261 TCP transport: 00:29:19.261 polls: 44984 00:29:19.261 idle_polls: 17762 00:29:19.261 sock_completions: 27222 00:29:19.261 nvme_completions: 3579 00:29:19.261 submitted_requests: 5400 00:29:19.261 queued_requests: 1 00:29:19.261 ======================================================== 00:29:19.261 Latency(us) 00:29:19.261 Device Information : IOPS MiB/s Average min max 00:29:19.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 855.00 213.75 154052.59 81368.38 280188.48 00:29:19.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 894.50 223.62 149078.02 77882.32 260610.89 00:29:19.261 ======================================================== 00:29:19.261 Total : 1749.49 437.37 151509.14 77882.32 280188.48 00:29:19.261 00:29:19.261 09:07:41 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:19.261 09:07:41 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:19.521 09:07:41 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:19.521 09:07:41 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:29:19.521 09:07:41 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:20.462 09:07:42 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # 
ls_guid=21f73ad6-4db7-4b44-9a0b-89edc47d7e48 00:29:20.462 09:07:42 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 21f73ad6-4db7-4b44-9a0b-89edc47d7e48 00:29:20.462 09:07:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_uuid=21f73ad6-4db7-4b44-9a0b-89edc47d7e48 00:29:20.462 09:07:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_info 00:29:20.462 09:07:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local fc 00:29:20.462 09:07:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local cs 00:29:20.462 09:07:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:20.722 09:07:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:29:20.722 { 00:29:20.722 "uuid": "21f73ad6-4db7-4b44-9a0b-89edc47d7e48", 00:29:20.722 "name": "lvs_0", 00:29:20.722 "base_bdev": "Nvme0n1", 00:29:20.722 "total_data_clusters": 457407, 00:29:20.722 "free_clusters": 457407, 00:29:20.722 "block_size": 512, 00:29:20.722 "cluster_size": 4194304 00:29:20.722 } 00:29:20.722 ]' 00:29:20.722 09:07:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="21f73ad6-4db7-4b44-9a0b-89edc47d7e48") .free_clusters' 00:29:20.722 09:07:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # fc=457407 00:29:20.722 09:07:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="21f73ad6-4db7-4b44-9a0b-89edc47d7e48") .cluster_size' 00:29:20.722 09:07:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # cs=4194304 00:29:20.722 09:07:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1372 -- # free_mb=1829628 00:29:20.722 09:07:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # echo 1829628 00:29:20.722 1829628 00:29:20.722 09:07:43 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:29:20.722 09:07:43 nvmf_tcp.nvmf_perf -- 
host/perf.sh@78 -- # free_mb=20480 00:29:20.722 09:07:43 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 21f73ad6-4db7-4b44-9a0b-89edc47d7e48 lbd_0 20480 00:29:20.981 09:07:43 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=283447c5-41f7-4a78-a2bb-0331979af06b 00:29:20.981 09:07:43 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 283447c5-41f7-4a78-a2bb-0331979af06b lvs_n_0 00:29:22.432 09:07:44 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=bf2403b2-abfc-45b4-85a2-6fab15a61bfa 00:29:22.432 09:07:44 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb bf2403b2-abfc-45b4-85a2-6fab15a61bfa 00:29:22.432 09:07:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_uuid=bf2403b2-abfc-45b4-85a2-6fab15a61bfa 00:29:22.432 09:07:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_info 00:29:22.432 09:07:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local fc 00:29:22.432 09:07:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local cs 00:29:22.432 09:07:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:22.693 09:07:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:29:22.693 { 00:29:22.693 "uuid": "21f73ad6-4db7-4b44-9a0b-89edc47d7e48", 00:29:22.693 "name": "lvs_0", 00:29:22.693 "base_bdev": "Nvme0n1", 00:29:22.693 "total_data_clusters": 457407, 00:29:22.693 "free_clusters": 452287, 00:29:22.693 "block_size": 512, 00:29:22.693 "cluster_size": 4194304 00:29:22.693 }, 00:29:22.693 { 00:29:22.693 "uuid": "bf2403b2-abfc-45b4-85a2-6fab15a61bfa", 00:29:22.693 "name": "lvs_n_0", 00:29:22.693 "base_bdev": "283447c5-41f7-4a78-a2bb-0331979af06b", 00:29:22.693 "total_data_clusters": 5114, 00:29:22.693 "free_clusters": 
5114, 00:29:22.693 "block_size": 512, 00:29:22.693 "cluster_size": 4194304 00:29:22.693 } 00:29:22.693 ]' 00:29:22.693 09:07:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="bf2403b2-abfc-45b4-85a2-6fab15a61bfa") .free_clusters' 00:29:22.693 09:07:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # fc=5114 00:29:22.693 09:07:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="bf2403b2-abfc-45b4-85a2-6fab15a61bfa") .cluster_size' 00:29:22.693 09:07:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # cs=4194304 00:29:22.693 09:07:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1372 -- # free_mb=20456 00:29:22.693 09:07:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # echo 20456 00:29:22.693 20456 00:29:22.693 09:07:45 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:22.693 09:07:45 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bf2403b2-abfc-45b4-85a2-6fab15a61bfa lbd_nest_0 20456 00:29:22.954 09:07:45 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=0460c0c2-4475-4a88-b97b-f6e1412237ee 00:29:22.954 09:07:45 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:23.215 09:07:45 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:23.215 09:07:45 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 0460c0c2-4475-4a88-b97b-f6e1412237ee 00:29:23.215 09:07:45 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:23.477 09:07:45 nvmf_tcp.nvmf_perf -- 
host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:23.477 09:07:45 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:23.477 09:07:45 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:23.477 09:07:45 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:23.477 09:07:45 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:23.477 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.734 Initializing NVMe Controllers 00:29:35.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:35.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:35.734 Initialization complete. Launching workers. 00:29:35.734 ======================================================== 00:29:35.734 Latency(us) 00:29:35.735 Device Information : IOPS MiB/s Average min max 00:29:35.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.30 0.02 21647.54 466.32 45474.75 00:29:35.735 ======================================================== 00:29:35.735 Total : 46.30 0.02 21647.54 466.32 45474.75 00:29:35.735 00:29:35.735 09:07:56 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:35.735 09:07:56 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:35.735 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.726 Initializing NVMe Controllers 00:29:45.726 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:45.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:45.726 Initialization complete. 
Launching workers. 00:29:45.726 ======================================================== 00:29:45.726 Latency(us) 00:29:45.726 Device Information : IOPS MiB/s Average min max 00:29:45.726 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.50 9.69 12909.93 6857.80 54871.15 00:29:45.726 ======================================================== 00:29:45.726 Total : 77.50 9.69 12909.93 6857.80 54871.15 00:29:45.726 00:29:45.726 09:08:06 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:45.726 09:08:06 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:45.726 09:08:06 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:45.726 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.729 Initializing NVMe Controllers 00:29:55.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:55.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:55.729 Initialization complete. Launching workers. 
00:29:55.729 ======================================================== 00:29:55.729 Latency(us) 00:29:55.729 Device Information : IOPS MiB/s Average min max 00:29:55.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8473.10 4.14 3776.81 412.74 11148.29 00:29:55.729 ======================================================== 00:29:55.729 Total : 8473.10 4.14 3776.81 412.74 11148.29 00:29:55.729 00:29:55.729 09:08:16 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:55.729 09:08:16 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:55.729 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.719 Initializing NVMe Controllers 00:30:05.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:05.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:05.719 Initialization complete. Launching workers. 
00:30:05.719 ======================================================== 00:30:05.719 Latency(us) 00:30:05.720 Device Information : IOPS MiB/s Average min max 00:30:05.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1766.30 220.79 18156.94 1008.00 41386.34 00:30:05.720 ======================================================== 00:30:05.720 Total : 1766.30 220.79 18156.94 1008.00 41386.34 00:30:05.720 00:30:05.720 09:08:27 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:05.720 09:08:27 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:05.720 09:08:27 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:05.720 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.723 Initializing NVMe Controllers 00:30:15.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:15.723 Controller IO queue size 128, less than required. 00:30:15.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:15.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:15.723 Initialization complete. Launching workers. 
00:30:15.723 ======================================================== 00:30:15.723 Latency(us) 00:30:15.723 Device Information : IOPS MiB/s Average min max 00:30:15.723 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15878.60 7.75 8065.76 1953.86 48444.60 00:30:15.723 ======================================================== 00:30:15.723 Total : 15878.60 7.75 8065.76 1953.86 48444.60 00:30:15.723 00:30:15.723 09:08:37 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:15.724 09:08:37 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:15.724 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.727 Initializing NVMe Controllers 00:30:25.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:25.727 Controller IO queue size 128, less than required. 00:30:25.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:25.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:25.727 Initialization complete. Launching workers. 
00:30:25.727 ======================================================== 00:30:25.727 Latency(us) 00:30:25.727 Device Information : IOPS MiB/s Average min max 00:30:25.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1143.50 142.94 113246.32 22449.61 243701.84 00:30:25.727 ======================================================== 00:30:25.727 Total : 1143.50 142.94 113246.32 22449.61 243701.84 00:30:25.727 00:30:25.727 09:08:47 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:25.727 09:08:48 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0460c0c2-4475-4a88-b97b-f6e1412237ee 00:30:27.668 09:08:49 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:27.668 09:08:49 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 283447c5-41f7-4a78-a2bb-0331979af06b 00:30:27.668 09:08:50 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:27.928 09:08:50 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:27.928 09:08:50 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:27.928 09:08:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:27.928 09:08:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:30:27.928 09:08:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:27.928 09:08:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:30:27.928 09:08:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:27.928 09:08:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:27.928 rmmod 
nvme_tcp 00:30:27.928 rmmod nvme_fabrics 00:30:27.928 rmmod nvme_keyring 00:30:27.928 09:08:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:27.928 09:08:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:30:27.928 09:08:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:30:27.928 09:08:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2742139 ']' 00:30:27.928 09:08:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2742139 00:30:27.929 09:08:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 2742139 ']' 00:30:27.929 09:08:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 2742139 00:30:27.929 09:08:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:30:27.929 09:08:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:27.929 09:08:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2742139 00:30:27.929 09:08:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:27.929 09:08:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:27.929 09:08:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2742139' 00:30:27.929 killing process with pid 2742139 00:30:27.929 09:08:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@968 -- # kill 2742139 00:30:27.929 09:08:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@973 -- # wait 2742139 00:30:29.843 09:08:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:29.843 09:08:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:29.843 09:08:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:29.843 09:08:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:29.843 09:08:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:30:29.843 09:08:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.844 09:08:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:29.844 09:08:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.391 09:08:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:32.391 00:30:32.391 real 1m31.930s 00:30:32.391 user 5m26.842s 00:30:32.391 sys 0m12.903s 00:30:32.391 09:08:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:32.391 09:08:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:32.391 ************************************ 00:30:32.391 END TEST nvmf_perf 00:30:32.391 ************************************ 00:30:32.391 09:08:54 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:32.391 09:08:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:32.391 09:08:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:32.391 09:08:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:32.391 ************************************ 00:30:32.391 START TEST nvmf_fio_host 00:30:32.391 ************************************ 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:32.391 * Looking for test storage... 
00:30:32.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:32.391 09:08:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:32.392 
09:08:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:32.392 09:08:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:38.987 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:38.988 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:38.988 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:38.988 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:38.988 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:30:38.988 
09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:38.988 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:39.255 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:39.255 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:39.255 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:39.255 09:09:01 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:39.255 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:39.255 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:39.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:39.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:30:39.518 00:30:39.518 --- 10.0.0.2 ping statistics --- 00:30:39.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.518 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:39.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:39.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:30:39.518 00:30:39.518 --- 10.0.0.1 ping statistics --- 00:30:39.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.518 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:39.518 09:09:01 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2762233 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2762233 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 2762233 ']' 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:39.518 09:09:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.518 [2024-06-09 09:09:01.953102] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:30:39.518 [2024-06-09 09:09:01.953167] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.518 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.518 [2024-06-09 09:09:02.024020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:39.779 [2024-06-09 09:09:02.101969] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.779 [2024-06-09 09:09:02.102006] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.779 [2024-06-09 09:09:02.102014] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.779 [2024-06-09 09:09:02.102020] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.779 [2024-06-09 09:09:02.102027] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:39.779 [2024-06-09 09:09:02.102193] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.779 [2024-06-09 09:09:02.102307] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:39.779 [2024-06-09 09:09:02.102462] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.779 [2024-06-09 09:09:02.102461] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:40.351 09:09:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:40.351 09:09:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:30:40.351 09:09:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:40.351 [2024-06-09 09:09:02.885289] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.613 09:09:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:40.613 09:09:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:40.613 09:09:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.613 09:09:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:40.613 Malloc1 00:30:40.613 09:09:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:40.874 09:09:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:41.135 09:09:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:41.135 
[2024-06-09 09:09:03.619172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.135 09:09:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:41.397 09:09:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:41.658 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:41.658 fio-3.35 00:30:41.658 Starting 1 thread 00:30:41.658 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.207 00:30:44.207 test: (groupid=0, jobs=1): err= 0: pid=2762845: Sun Jun 9 09:09:06 2024 00:30:44.207 read: IOPS=14.0k, BW=54.7MiB/s (57.4MB/s)(110MiB/2004msec) 00:30:44.207 slat (usec): 
min=2, max=278, avg= 2.20, stdev= 2.31 00:30:44.207 clat (usec): min=3262, max=12808, avg=5226.70, stdev=824.73 00:30:44.207 lat (usec): min=3264, max=12821, avg=5228.90, stdev=824.98 00:30:44.207 clat percentiles (usec): 00:30:44.207 | 1.00th=[ 3851], 5.00th=[ 4228], 10.00th=[ 4424], 20.00th=[ 4686], 00:30:44.207 | 30.00th=[ 4817], 40.00th=[ 4948], 50.00th=[ 5080], 60.00th=[ 5211], 00:30:44.207 | 70.00th=[ 5407], 80.00th=[ 5669], 90.00th=[ 6194], 95.00th=[ 6718], 00:30:44.207 | 99.00th=[ 8094], 99.50th=[ 8586], 99.90th=[11600], 99.95th=[12256], 00:30:44.207 | 99.99th=[12780] 00:30:44.207 bw ( KiB/s): min=54064, max=56984, per=99.91%, avg=55966.00, stdev=1297.34, samples=4 00:30:44.207 iops : min=13516, max=14246, avg=13991.50, stdev=324.33, samples=4 00:30:44.207 write: IOPS=14.0k, BW=54.7MiB/s (57.4MB/s)(110MiB/2004msec); 0 zone resets 00:30:44.207 slat (usec): min=2, max=258, avg= 2.30, stdev= 1.73 00:30:44.207 clat (usec): min=1956, max=11555, avg=3872.83, stdev=577.62 00:30:44.207 lat (usec): min=1959, max=11587, avg=3875.13, stdev=578.00 00:30:44.207 clat percentiles (usec): 00:30:44.207 | 1.00th=[ 2606], 5.00th=[ 2966], 10.00th=[ 3228], 20.00th=[ 3490], 00:30:44.207 | 30.00th=[ 3654], 40.00th=[ 3785], 50.00th=[ 3916], 60.00th=[ 4015], 00:30:44.207 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4555], 00:30:44.207 | 99.00th=[ 5211], 99.50th=[ 5669], 99.90th=[10421], 99.95th=[10945], 00:30:44.207 | 99.99th=[11338] 00:30:44.207 bw ( KiB/s): min=54384, max=56816, per=100.00%, avg=56046.00, stdev=1120.58, samples=4 00:30:44.207 iops : min=13596, max=14204, avg=14011.50, stdev=280.14, samples=4 00:30:44.207 lat (msec) : 2=0.01%, 4=30.75%, 10=69.04%, 20=0.20% 00:30:44.207 cpu : usr=69.85%, sys=22.52%, ctx=23, majf=0, minf=5 00:30:44.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:44.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:30:44.207 issued rwts: total=28063,28070,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:44.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:44.207 00:30:44.207 Run status group 0 (all jobs): 00:30:44.207 READ: bw=54.7MiB/s (57.4MB/s), 54.7MiB/s-54.7MiB/s (57.4MB/s-57.4MB/s), io=110MiB (115MB), run=2004-2004msec 00:30:44.207 WRITE: bw=54.7MiB/s (57.4MB/s), 54.7MiB/s-54.7MiB/s (57.4MB/s-57.4MB/s), io=110MiB (115MB), run=2004-2004msec 00:30:44.207 09:09:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:44.207 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:44.207 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:44.207 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:44.207 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:44.207 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:44.207 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:30:44.207 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:44.207 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:44.208 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:44.208 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:30:44.208 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:44.208 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:44.208 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:44.208 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:44.208 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:44.208 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:44.208 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:44.208 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:44.208 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:44.208 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:44.208 09:09:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:44.469 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:44.469 fio-3.35 00:30:44.469 Starting 1 thread 00:30:44.469 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.016 00:30:47.016 test: (groupid=0, jobs=1): err= 0: pid=2763567: Sun Jun 9 09:09:09 2024 00:30:47.016 read: IOPS=8670, BW=135MiB/s (142MB/s)(272MiB/2011msec) 00:30:47.016 slat (usec): 
min=3, max=109, avg= 3.69, stdev= 1.66 00:30:47.016 clat (usec): min=2351, max=29020, avg=9128.20, stdev=2582.23 00:30:47.016 lat (usec): min=2355, max=29024, avg=9131.88, stdev=2582.56 00:30:47.016 clat percentiles (usec): 00:30:47.016 | 1.00th=[ 4555], 5.00th=[ 5604], 10.00th=[ 6259], 20.00th=[ 7046], 00:30:47.016 | 30.00th=[ 7635], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9372], 00:30:47.016 | 70.00th=[10159], 80.00th=[11207], 90.00th=[12256], 95.00th=[13698], 00:30:47.016 | 99.00th=[17171], 99.50th=[19006], 99.90th=[23200], 99.95th=[23200], 00:30:47.016 | 99.99th=[23462] 00:30:47.016 bw ( KiB/s): min=61184, max=76352, per=50.78%, avg=70440.00, stdev=7110.50, samples=4 00:30:47.016 iops : min= 3824, max= 4772, avg=4402.50, stdev=444.41, samples=4 00:30:47.016 write: IOPS=5132, BW=80.2MiB/s (84.1MB/s)(143MiB/1782msec); 0 zone resets 00:30:47.016 slat (usec): min=40, max=444, avg=41.29, stdev= 8.68 00:30:47.016 clat (usec): min=2663, max=23767, avg=9861.44, stdev=2075.53 00:30:47.016 lat (usec): min=2704, max=23811, avg=9902.73, stdev=2078.65 00:30:47.016 clat percentiles (usec): 00:30:47.016 | 1.00th=[ 6587], 5.00th=[ 7439], 10.00th=[ 7767], 20.00th=[ 8455], 00:30:47.016 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:30:47.016 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11863], 95.00th=[12780], 00:30:47.016 | 99.00th=[18482], 99.50th=[23200], 99.90th=[23462], 99.95th=[23462], 00:30:47.016 | 99.99th=[23725] 00:30:47.016 bw ( KiB/s): min=64832, max=79520, per=89.10%, avg=73176.00, stdev=6840.69, samples=4 00:30:47.016 iops : min= 4052, max= 4970, avg=4573.50, stdev=427.54, samples=4 00:30:47.016 lat (msec) : 4=0.31%, 10=64.72%, 20=34.49%, 50=0.48% 00:30:47.016 cpu : usr=82.59%, sys=12.64%, ctx=12, majf=0, minf=18 00:30:47.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:30:47.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:47.017 issued rwts: total=17436,9147,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.017 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:47.017 00:30:47.017 Run status group 0 (all jobs): 00:30:47.017 READ: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=272MiB (286MB), run=2011-2011msec 00:30:47.017 WRITE: bw=80.2MiB/s (84.1MB/s), 80.2MiB/s-80.2MiB/s (84.1MB/s-84.1MB/s), io=143MiB (150MB), run=1782-1782msec 00:30:47.017 09:09:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:47.017 09:09:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:47.017 09:09:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:47.017 09:09:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:47.017 09:09:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1512 -- # bdfs=() 00:30:47.017 09:09:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1512 -- # local bdfs 00:30:47.017 09:09:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:47.017 09:09:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:47.017 09:09:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:30:47.017 09:09:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:30:47.017 09:09:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:30:47.017 09:09:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:30:47.589 Nvme0n1 00:30:47.589 09:09:10 
nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:48.161 09:09:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=90b7515b-7015-47cd-a268-3d8cdda296a7 00:30:48.161 09:09:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 90b7515b-7015-47cd-a268-3d8cdda296a7 00:30:48.161 09:09:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local lvs_uuid=90b7515b-7015-47cd-a268-3d8cdda296a7 00:30:48.161 09:09:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_info 00:30:48.161 09:09:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local fc 00:30:48.161 09:09:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local cs 00:30:48.161 09:09:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:48.421 09:09:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:30:48.421 { 00:30:48.421 "uuid": "90b7515b-7015-47cd-a268-3d8cdda296a7", 00:30:48.421 "name": "lvs_0", 00:30:48.421 "base_bdev": "Nvme0n1", 00:30:48.421 "total_data_clusters": 1787, 00:30:48.421 "free_clusters": 1787, 00:30:48.421 "block_size": 512, 00:30:48.421 "cluster_size": 1073741824 00:30:48.421 } 00:30:48.421 ]' 00:30:48.421 09:09:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="90b7515b-7015-47cd-a268-3d8cdda296a7") .free_clusters' 00:30:48.421 09:09:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # fc=1787 00:30:48.421 09:09:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="90b7515b-7015-47cd-a268-3d8cdda296a7") .cluster_size' 00:30:48.421 09:09:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # cs=1073741824 00:30:48.421 09:09:10 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1372 -- # free_mb=1829888 00:30:48.421 09:09:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # echo 1829888 00:30:48.421 1829888 00:30:48.421 09:09:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:30:48.681 1ada4154-477f-47b8-8709-dcc7adf50364 00:30:48.681 09:09:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:48.681 09:09:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:48.943 09:09:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:49.204 09:09:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:49.464 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:49.464 fio-3.35 00:30:49.464 Starting 1 thread 00:30:49.464 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.083 00:30:52.083 test: (groupid=0, jobs=1): err= 0: pid=2765120: Sun Jun 9 09:09:14 2024 00:30:52.083 read: IOPS=10.4k, BW=40.6MiB/s (42.6MB/s)(81.4MiB/2005msec) 00:30:52.083 slat (usec): min=2, max=106, avg= 2.29, stdev= 1.02 00:30:52.083 clat (usec): min=3167, max=12689, avg=6948.66, stdev=922.18 00:30:52.083 lat (usec): min=3186, max=12691, avg=6950.95, stdev=922.17 00:30:52.083 clat percentiles (usec): 00:30:52.083 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5932], 20.00th=[ 6259], 00:30:52.083 | 30.00th=[ 6456], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 7046], 00:30:52.083 | 70.00th=[ 7242], 80.00th=[ 7504], 90.00th=[ 8029], 95.00th=[ 8586], 00:30:52.083 | 99.00th=[10159], 99.50th=[10814], 99.90th=[11731], 99.95th=[12125], 00:30:52.083 | 99.99th=[12649] 00:30:52.083 bw ( KiB/s): min=39968, max=42440, per=99.90%, avg=41532.00, stdev=1077.92, samples=4 00:30:52.083 iops : min= 9992, max=10610, avg=10383.00, stdev=269.48, samples=4 00:30:52.083 write: IOPS=10.4k, BW=40.6MiB/s (42.6MB/s)(81.4MiB/2005msec); 0 zone resets 00:30:52.083 slat (nsec): min=2162, max=93767, avg=2389.64, stdev=704.37 00:30:52.083 clat (usec): min=2054, max=9832, avg=5254.97, stdev=645.10 00:30:52.083 lat (usec): min=2062, max=9834, avg=5257.36, stdev=645.11 00:30:52.083 clat percentiles (usec): 00:30:52.083 | 1.00th=[ 3589], 5.00th=[ 4178], 10.00th=[ 4424], 20.00th=[ 4752], 00:30:52.083 | 30.00th=[ 4948], 40.00th=[ 5145], 50.00th=[ 5276], 60.00th=[ 5407], 00:30:52.083 | 70.00th=[ 5604], 80.00th=[ 5735], 90.00th=[ 5997], 95.00th=[ 6259], 00:30:52.083 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 7570], 99.95th=[ 9110], 00:30:52.083 | 99.99th=[ 9765] 00:30:52.083 bw ( KiB/s): min=40544, max=42208, 
per=99.99%, avg=41594.00, stdev=726.12, samples=4 00:30:52.083 iops : min=10136, max=10552, avg=10398.50, stdev=181.53, samples=4 00:30:52.083 lat (msec) : 4=1.62%, 10=97.86%, 20=0.52% 00:30:52.083 cpu : usr=65.82%, sys=27.64%, ctx=30, majf=0, minf=5 00:30:52.083 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:52.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:52.083 issued rwts: total=20839,20851,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:52.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:52.083 00:30:52.083 Run status group 0 (all jobs): 00:30:52.083 READ: bw=40.6MiB/s (42.6MB/s), 40.6MiB/s-40.6MiB/s (42.6MB/s-42.6MB/s), io=81.4MiB (85.4MB), run=2005-2005msec 00:30:52.083 WRITE: bw=40.6MiB/s (42.6MB/s), 40.6MiB/s-40.6MiB/s (42.6MB/s-42.6MB/s), io=81.4MiB (85.4MB), run=2005-2005msec 00:30:52.083 09:09:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:52.083 09:09:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:53.025 09:09:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=76028ac0-16a6-45fb-bd5a-5e618b6780bf 00:30:53.025 09:09:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 76028ac0-16a6-45fb-bd5a-5e618b6780bf 00:30:53.025 09:09:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local lvs_uuid=76028ac0-16a6-45fb-bd5a-5e618b6780bf 00:30:53.025 09:09:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_info 00:30:53.025 09:09:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local fc 00:30:53.025 09:09:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # 
local cs 00:30:53.025 09:09:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:53.025 09:09:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:30:53.025 { 00:30:53.025 "uuid": "90b7515b-7015-47cd-a268-3d8cdda296a7", 00:30:53.025 "name": "lvs_0", 00:30:53.025 "base_bdev": "Nvme0n1", 00:30:53.025 "total_data_clusters": 1787, 00:30:53.025 "free_clusters": 0, 00:30:53.025 "block_size": 512, 00:30:53.025 "cluster_size": 1073741824 00:30:53.025 }, 00:30:53.025 { 00:30:53.025 "uuid": "76028ac0-16a6-45fb-bd5a-5e618b6780bf", 00:30:53.025 "name": "lvs_n_0", 00:30:53.025 "base_bdev": "1ada4154-477f-47b8-8709-dcc7adf50364", 00:30:53.025 "total_data_clusters": 457025, 00:30:53.026 "free_clusters": 457025, 00:30:53.026 "block_size": 512, 00:30:53.026 "cluster_size": 4194304 00:30:53.026 } 00:30:53.026 ]' 00:30:53.026 09:09:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="76028ac0-16a6-45fb-bd5a-5e618b6780bf") .free_clusters' 00:30:53.026 09:09:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # fc=457025 00:30:53.026 09:09:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="76028ac0-16a6-45fb-bd5a-5e618b6780bf") .cluster_size' 00:30:53.286 09:09:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # cs=4194304 00:30:53.286 09:09:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1372 -- # free_mb=1828100 00:30:53.286 09:09:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # echo 1828100 00:30:53.286 1828100 00:30:53.286 09:09:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:30:54.228 5267307e-d61e-492f-855c-230ee4e7affc 00:30:54.228 09:09:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:54.489 09:09:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:54.489 09:09:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:54.750 09:09:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:54.750 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:54.750 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:54.750 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:54.750 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:54.750 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:54.750 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:30:54.750 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:54.750 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:54.750 09:09:17 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:54.750 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:30:54.750 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:54.750 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:54.750 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:54.750 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:54.750 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:54.750 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:54.751 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:54.751 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:54.751 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:54.751 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:54.751 09:09:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:55.011 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:55.011 fio-3.35 00:30:55.011 Starting 1 thread 00:30:55.011 EAL: No free 2048 kB hugepages reported on node 1 00:30:57.561 00:30:57.561 test: (groupid=0, jobs=1): err= 0: pid=2766366: Sun Jun 9 09:09:19 2024 00:30:57.561 read: IOPS=9218, BW=36.0MiB/s 
(37.8MB/s)(72.2MiB/2005msec) 00:30:57.561 slat (usec): min=2, max=107, avg= 2.27, stdev= 1.06 00:30:57.561 clat (usec): min=3812, max=17039, avg=7877.76, stdev=1169.80 00:30:57.561 lat (usec): min=3827, max=17041, avg=7880.03, stdev=1169.79 00:30:57.561 clat percentiles (usec): 00:30:57.561 | 1.00th=[ 5669], 5.00th=[ 6390], 10.00th=[ 6652], 20.00th=[ 7046], 00:30:57.561 | 30.00th=[ 7242], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7963], 00:30:57.561 | 70.00th=[ 8160], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[10290], 00:30:57.561 | 99.00th=[11731], 99.50th=[12387], 99.90th=[13698], 99.95th=[14353], 00:30:57.561 | 99.99th=[16450] 00:30:57.561 bw ( KiB/s): min=35104, max=37496, per=99.85%, avg=36820.00, stdev=1149.27, samples=4 00:30:57.561 iops : min= 8776, max= 9374, avg=9205.00, stdev=287.32, samples=4 00:30:57.561 write: IOPS=9224, BW=36.0MiB/s (37.8MB/s)(72.2MiB/2005msec); 0 zone resets 00:30:57.561 slat (nsec): min=2169, max=99122, avg=2366.35, stdev=773.36 00:30:57.561 clat (usec): min=1579, max=10727, avg=5898.07, stdev=807.49 00:30:57.561 lat (usec): min=1586, max=10730, avg=5900.44, stdev=807.51 00:30:57.561 clat percentiles (usec): 00:30:57.561 | 1.00th=[ 3720], 5.00th=[ 4490], 10.00th=[ 4883], 20.00th=[ 5276], 00:30:57.561 | 30.00th=[ 5538], 40.00th=[ 5735], 50.00th=[ 5932], 60.00th=[ 6128], 00:30:57.561 | 70.00th=[ 6325], 80.00th=[ 6521], 90.00th=[ 6783], 95.00th=[ 7111], 00:30:57.561 | 99.00th=[ 7832], 99.50th=[ 8094], 99.90th=[ 9372], 99.95th=[ 9896], 00:30:57.561 | 99.99th=[10683] 00:30:57.561 bw ( KiB/s): min=36072, max=37376, per=99.96%, avg=36882.00, stdev=602.62, samples=4 00:30:57.561 iops : min= 9018, max= 9344, avg=9220.50, stdev=150.66, samples=4 00:30:57.561 lat (msec) : 2=0.01%, 4=0.98%, 10=95.97%, 20=3.05% 00:30:57.561 cpu : usr=66.12%, sys=28.09%, ctx=26, majf=0, minf=5 00:30:57.561 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:57.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:30:57.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:57.561 issued rwts: total=18483,18495,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.561 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:57.561 00:30:57.561 Run status group 0 (all jobs): 00:30:57.561 READ: bw=36.0MiB/s (37.8MB/s), 36.0MiB/s-36.0MiB/s (37.8MB/s-37.8MB/s), io=72.2MiB (75.7MB), run=2005-2005msec 00:30:57.561 WRITE: bw=36.0MiB/s (37.8MB/s), 36.0MiB/s-36.0MiB/s (37.8MB/s-37.8MB/s), io=72.2MiB (75.8MB), run=2005-2005msec 00:30:57.561 09:09:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:57.561 09:09:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:57.561 09:09:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:00.110 09:09:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:00.110 09:09:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:00.371 09:09:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:00.632 09:09:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:02.547 09:09:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:02.547 09:09:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:02.547 09:09:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:02.547 09:09:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:31:02.547 09:09:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:31:02.547 09:09:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:02.547 09:09:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:31:02.547 09:09:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:02.547 09:09:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:02.547 rmmod nvme_tcp 00:31:02.547 rmmod nvme_fabrics 00:31:02.547 rmmod nvme_keyring 00:31:02.547 09:09:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:02.547 09:09:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:31:02.547 09:09:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:31:02.547 09:09:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2762233 ']' 00:31:02.547 09:09:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2762233 00:31:02.547 09:09:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 2762233 ']' 00:31:02.547 09:09:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 2762233 00:31:02.547 09:09:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:31:02.547 09:09:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:02.547 09:09:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2762233 00:31:02.808 09:09:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:02.808 09:09:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:02.808 09:09:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2762233' 00:31:02.808 killing process with pid 2762233 00:31:02.808 09:09:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 2762233 00:31:02.808 09:09:25 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 2762233 00:31:02.808 09:09:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:02.808 09:09:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:02.808 09:09:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:02.808 09:09:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:02.808 09:09:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:02.808 09:09:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.808 09:09:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:02.809 09:09:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.357 09:09:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:05.357 00:31:05.357 real 0m32.799s 00:31:05.357 user 2m45.628s 00:31:05.357 sys 0m9.629s 00:31:05.357 09:09:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:05.357 09:09:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.357 ************************************ 00:31:05.357 END TEST nvmf_fio_host 00:31:05.357 ************************************ 00:31:05.357 09:09:27 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:05.357 09:09:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:05.357 09:09:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:05.357 09:09:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:05.357 ************************************ 00:31:05.357 START TEST nvmf_failover 00:31:05.357 ************************************ 00:31:05.357 09:09:27 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:05.357 * Looking for test storage... 00:31:05.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:05.357 09:09:27 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.357 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:05.357 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.357 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.357 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.357 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.357 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:05.358 09:09:27 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:31:05.358 09:09:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:12.018 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:12.018 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:31:12.018 09:09:34 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:12.018 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:12.018 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:12.018 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:12.018 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:12.018 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:31:12.018 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:12.018 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:31:12.018 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:31:12.018 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:31:12.018 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:12.019 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:12.019 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:12.019 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:12.019 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:12.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:12.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.805 ms 00:31:12.019 00:31:12.019 --- 10.0.0.2 ping statistics --- 00:31:12.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.019 rtt min/avg/max/mdev = 0.805/0.805/0.805/0.000 ms 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:12.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:12.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:31:12.019 00:31:12.019 --- 10.0.0.1 ping statistics --- 00:31:12.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.019 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2771833 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2771833 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 2771833 ']' 
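The ping exchange above confirms the test topology that nvmftestinit just built: the target-side NIC (cvl_0_0) is moved into a private network namespace while the initiator-side NIC (cvl_0_1) stays in the root namespace. A minimal sketch of that setup follows; interface names, addresses, and the 4420 port are taken from this log, and `run()` echoes each command instead of executing it, so the sketch can be inspected without root.

```shell
#!/bin/sh
# Sketch of the two-NIC loopback topology nvmftestinit builds in this log.
# Names/IPs come from the log above; run() echoes rather than executes.
TGT_IF=cvl_0_0        # target-side NIC (moves into the namespace)
INI_IF=cvl_0_1        # initiator-side NIC (stays in the root namespace)
NS=cvl_0_0_ns_spdk    # namespace name used by nvmftestinit

run() { echo "+ $*"; }   # replace with: sudo "$@"   to apply for real

build_topology() {
    run ip -4 addr flush "$TGT_IF"
    run ip -4 addr flush "$INI_IF"
    run ip netns add "$NS"
    run ip link set "$TGT_IF" netns "$NS"                          # target NIC enters the namespace
    run ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
    run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
    run ip link set "$INI_IF" up
    run ip netns exec "$NS" ip link set "$TGT_IF" up
    run ip netns exec "$NS" ip link set lo up
    run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                                         # initiator -> target
    run ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator
}
build_topology
```

Because the two NICs live in separate namespaces, traffic between 10.0.0.1 and 10.0.0.2 actually traverses the wire (the ports are cabled back-to-back on the phy test node) rather than being short-circuited by the local routing table.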
00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:12.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:12.019 09:09:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:12.019 [2024-06-09 09:09:34.574259] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:31:12.019 [2024-06-09 09:09:34.574307] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:12.280 EAL: No free 2048 kB hugepages reported on node 1 00:31:12.280 [2024-06-09 09:09:34.653733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:12.280 [2024-06-09 09:09:34.719318] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:12.280 [2024-06-09 09:09:34.719354] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:12.280 [2024-06-09 09:09:34.719362] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:12.280 [2024-06-09 09:09:34.719368] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:12.280 [2024-06-09 09:09:34.719373] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
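Once nvmf_tgt is up inside the namespace, failover.sh provisions it over rpc.py (failover.sh@22 through @28 in this log): one TCP transport, one 64 MiB malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512), one subsystem, and three listeners for the failover test to cycle through. A sketch of that sequence, with the rpc.py path and NQN taken from the log; `rpc()` echoes the call instead of issuing it against a live target.

```shell
#!/bin/sh
# Sketch of the rpc.py provisioning sequence from this log.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

rpc() { echo "+ $RPC $*"; }   # replace with: "$RPC" "$@"   against a running nvmf_tgt

provision_target() {
    rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
    rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev, 512 B blocks
    rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns "$NQN" Malloc0
    for port in 4420 4421 4422; do                     # one listener per failover path
        rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
    done
}
provision_target
```

Three listeners on the same address but different ports give the host three independent paths to the same namespace, which is what the failover test needs: it can tear one path down and verify I/O survives on another.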
00:31:12.280 [2024-06-09 09:09:34.719476] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:12.280 [2024-06-09 09:09:34.719640] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:12.280 [2024-06-09 09:09:34.719730] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.851 09:09:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:12.851 09:09:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:31:12.851 09:09:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:12.851 09:09:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:12.851 09:09:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:12.851 09:09:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:12.851 09:09:35 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:13.112 [2024-06-09 09:09:35.511130] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:13.112 09:09:35 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:13.373 Malloc0 00:31:13.373 09:09:35 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:13.373 09:09:35 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:13.634 09:09:36 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:13.895 [2024-06-09 09:09:36.208425] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:13.895 09:09:36 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:13.895 [2024-06-09 09:09:36.368831] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:13.895 09:09:36 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:14.157 [2024-06-09 09:09:36.529306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:14.157 09:09:36 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:14.157 09:09:36 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2772202 00:31:14.157 09:09:36 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:14.157 09:09:36 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2772202 /var/tmp/bdevperf.sock 00:31:14.157 09:09:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 2772202 ']' 00:31:14.157 09:09:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:14.157 09:09:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:14.157 09:09:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:14.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:14.157 09:09:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:14.157 09:09:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:15.101 09:09:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:15.101 09:09:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:31:15.101 09:09:37 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:15.361 NVMe0n1 00:31:15.361 09:09:37 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:15.622 00:31:15.622 09:09:38 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2772538 00:31:15.622 09:09:38 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:15.622 09:09:38 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:16.590 09:09:39 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:16.590 [2024-06-09 09:09:39.146616] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ad0e0 is same with the state(5) to be set 00:31:16.590 [2024-06-09 09:09:39.146656] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ad0e0 is 
same with the state(5) to be set 00:31:16.590 (last message repeated for each remaining recv-state transition on tqpair 0x10ad0e0) 00:31:16.851 09:09:39 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:20.155 09:09:42 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:20.155 00:31:20.155 09:09:42 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:20.155 [2024-06-09 09:09:42.636054] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the
state(5) to be set 00:31:20.155 (last message repeated for each remaining recv-state transition on tqpair 0x10add40)
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the state(5) to be set 00:31:20.156 [2024-06-09 09:09:42.636432] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the state(5) to be set 00:31:20.156 [2024-06-09 09:09:42.636436] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the state(5) to be set 00:31:20.156 [2024-06-09 09:09:42.636440] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the state(5) to be set 00:31:20.156 [2024-06-09 09:09:42.636444] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the state(5) to be set 00:31:20.156 [2024-06-09 09:09:42.636449] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the state(5) to be set 00:31:20.156 [2024-06-09 09:09:42.636453] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the state(5) to be set 00:31:20.156 [2024-06-09 09:09:42.636457] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the state(5) to be set 00:31:20.156 [2024-06-09 09:09:42.636462] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the state(5) to be set 00:31:20.156 [2024-06-09 09:09:42.636467] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the state(5) to be set 00:31:20.156 [2024-06-09 09:09:42.636472] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the state(5) to be set 00:31:20.156 [2024-06-09 09:09:42.636479] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the state(5) to be set 00:31:20.156 [2024-06-09 09:09:42.636484] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the state(5) to be set 00:31:20.156 [2024-06-09 09:09:42.636488] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the state(5) to be set 00:31:20.156 [2024-06-09 09:09:42.636493] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the state(5) to be set 00:31:20.156 [2024-06-09 09:09:42.636497] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the state(5) to be set 00:31:20.156 [2024-06-09 09:09:42.636501] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the state(5) to be set 00:31:20.156 [2024-06-09 09:09:42.636506] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10add40 is same with the state(5) to be set 00:31:20.156 09:09:42 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:23.463 09:09:45 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.463 [2024-06-09 09:09:45.813117] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.463 09:09:45 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:24.406 09:09:46 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:24.669 [2024-06-09 09:09:46.985163] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf04b80 is same with the state(5) to be set 00:31:24.669 [2024-06-09 09:09:46.985200] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf04b80 is same with the state(5) 
to be set 00:31:24.670 09:09:47 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2772538 00:31:31.263 0 00:31:31.263 09:09:53 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2772202 00:31:31.263 09:09:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 2772202 ']' 00:31:31.263 09:09:53 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 2772202 00:31:31.263 09:09:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:31:31.263 09:09:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:31.263 09:09:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2772202 00:31:31.263 09:09:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:31.263 09:09:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:31.263 09:09:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2772202' 00:31:31.263 killing process with pid 2772202 00:31:31.263 09:09:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 2772202 00:31:31.263 09:09:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 2772202 00:31:31.263 09:09:53 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:31.263 [2024-06-09 09:09:36.597407] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:31:31.263 [2024-06-09 09:09:36.597463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2772202 ] 00:31:31.263 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.263 [2024-06-09 09:09:36.655862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.263 [2024-06-09 09:09:36.719649] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.263 Running I/O for 15 seconds... 
00:31:31.263 [2024-06-09 09:09:39.147324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.263 [2024-06-09 09:09:39.147358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.263 [2024-06-09 09:09:39.147376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.263 [2024-06-09 09:09:39.147384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.263 [2024-06-09 09:09:39.147394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.263 [2024-06-09 09:09:39.147406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.263 [2024-06-09 09:09:39.147416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.263 [2024-06-09 09:09:39.147423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.263 [2024-06-09 09:09:39.147432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:101080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.263 [2024-06-09 09:09:39.147440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.263 [2024-06-09 09:09:39.147449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.263 [2024-06-09 09:09:39.147456] nvme_qpair.c: 
00:31:31.263-00:31:31.266 [2024-06-09 09:09:39.147466 - 09:09:39.149427] nvme_qpair.c: repeated *NOTICE* pairs (log run collapsed): 243:nvme_io_qpair_print_command: READ/WRITE sqid:1 nsid:1, various cids, lba:101096 through lba:102064, len:8 (READs: SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; WRITEs: SGL DATA BLOCK OFFSET 0x0 len:0x1000), each followed by 474:spdk_nvme_print_completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.266 [2024-06-09 09:09:39.149436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.266 [2024-06-09 09:09:39.149443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.266 [2024-06-09 09:09:39.149452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.266 [2024-06-09 09:09:39.149459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.266 [2024-06-09 09:09:39.149479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:31.266 [2024-06-09 09:09:39.149485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:31.266 [2024-06-09 09:09:39.149492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101840 len:8 PRP1 0x0 PRP2 0x0 00:31:31.266 [2024-06-09 09:09:39.149499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.266 [2024-06-09 09:09:39.149535] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x155fd80 was disconnected and freed. reset controller. 
00:31:31.266 [2024-06-09 09:09:39.149544] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:31:31.266 [2024-06-09 09:09:39.149562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:31.266 [2024-06-09 09:09:39.149570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:31.266 [2024-06-09 09:09:39.149578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:31.266 [2024-06-09 09:09:39.149585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:31.266 [2024-06-09 09:09:39.149593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:31.266 [2024-06-09 09:09:39.149601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:31.266 [2024-06-09 09:09:39.149609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:31.266 [2024-06-09 09:09:39.149616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:31.266 [2024-06-09 09:09:39.149623] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:31.266 [2024-06-09 09:09:39.153221] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:31.266 [2024-06-09 09:09:39.153247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1540cf0 (9): Bad file descriptor
00:31:31.266 [2024-06-09 09:09:39.230270] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:31.266 [2024-06-09 09:09:42.637316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.266 [2024-06-09 09:09:42.637353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION notice pairs repeated for lba 60432 through 61064, with two WRITE commands (lba 61184, 61192) interspersed ...]
00:31:31.269 [2024-06-09 09:09:42.638769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.269 [2024-06-09 09:09:42.638776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.638785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.269 [2024-06-09 09:09:42.638792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.638802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.269 [2024-06-09 09:09:42.638809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.638818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.269 [2024-06-09 09:09:42.638825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.638835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.269 [2024-06-09 09:09:42.638842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.638851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.269 [2024-06-09 09:09:42.638858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.638868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:31.269 [2024-06-09 09:09:42.638877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.638887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.638895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.638904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.638912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.638921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.638928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.638938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.638945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.638955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.638962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.638971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.638978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.638988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.638996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.639013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.639029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.639046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.639062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.639079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.639097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.639113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.639130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.639146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 
[2024-06-09 09:09:42.639162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.639179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.639195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.639211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.639228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.639244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639253] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.639260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.639276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.639293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.269 [2024-06-09 09:09:42.639311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.269 [2024-06-09 09:09:42.639321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.270 [2024-06-09 09:09:42.639328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:42.639338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.270 [2024-06-09 09:09:42.639345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:42.639354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.270 [2024-06-09 09:09:42.639361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:42.639370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.270 [2024-06-09 09:09:42.639377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:42.639387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:31.270 [2024-06-09 09:09:42.639394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:42.639406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:42.639413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:42.639423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:42.639430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:42.639440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:42.639447] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:42.639456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:42.639463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:42.639472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:42.639480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:42.639489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:42.639496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:42.639514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:31.270 [2024-06-09 09:09:42.639521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:31.270 [2024-06-09 09:09:42.639530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61176 len:8 PRP1 0x0 PRP2 0x0 00:31:31.270 [2024-06-09 09:09:42.639538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:42.639573] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x170a0f0 was disconnected and freed. reset controller. 
00:31:31.270 [2024-06-09 09:09:42.639583] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:31:31.270 [2024-06-09 09:09:42.639601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.270 [2024-06-09 09:09:42.639610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:42.639618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.270 [2024-06-09 09:09:42.639626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:42.639633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.270 [2024-06-09 09:09:42.639641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:42.639649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.270 [2024-06-09 09:09:42.639656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:42.639663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:31.270 [2024-06-09 09:09:42.643253] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:31.270 [2024-06-09 09:09:42.643278] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1540cf0 (9): Bad file descriptor 00:31:31.270 [2024-06-09 09:09:42.804774] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:31.270 [2024-06-09 09:09:46.988124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:116888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:116896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:116904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:116920 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:116928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:116936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:116952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 
09:09:46.988340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:116968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:116992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:117000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:117024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.270 [2024-06-09 09:09:46.988522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.270 [2024-06-09 09:09:46.988532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 
00:31:31.271 [2024-06-09 09:09:46.988539 - 09:09:47.001417] nvme_qpair.c: qpair teardown output, condensed; every completion below carries the identical status: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:31.271 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 nsid:1 lba:117168 through lba:117560 (len:8 each, ascending in steps of 8; cids vary) SGL DATA BLOCK OFFSET 0x0 len:0x1000 - each followed by 474:spdk_nvme_print_completion with the status above 
00:31:31.272 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 nsid:1 lba:117056 through lba:117160 (len:8 each, ascending in steps of 8; cids vary) SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 - each followed by 474:spdk_nvme_print_completion with the status above 
00:31:31.272 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o, interleaved with 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: WRITE sqid:1 cid:0 nsid:1 lba:117568 through lba:117824 (len:8 each, ascending in steps of 8) PRP1 0x0 PRP2 0x0 - each followed by 474:spdk_nvme_print_completion with the status above 
00:31:31.274 [2024-06-09 09:09:47.001417] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117832 len:8 PRP1 0x0 PRP2 0x0 00:31:31.274 [2024-06-09 09:09:47.001425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.274 [2024-06-09 09:09:47.001432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:31.274 [2024-06-09 09:09:47.001437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:31.274 [2024-06-09 09:09:47.001443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117840 len:8 PRP1 0x0 PRP2 0x0 00:31:31.274 [2024-06-09 09:09:47.001450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.274 [2024-06-09 09:09:47.001459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:31.274 [2024-06-09 09:09:47.001465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:31.274 [2024-06-09 09:09:47.001471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117848 len:8 PRP1 0x0 PRP2 0x0 00:31:31.274 [2024-06-09 09:09:47.001478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.274 [2024-06-09 09:09:47.001488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:31.274 [2024-06-09 09:09:47.001493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:31.274 [2024-06-09 09:09:47.001499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117856 len:8 PRP1 0x0 PRP2 0x0 00:31:31.274 [2024-06-09 09:09:47.001506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.274 [2024-06-09 09:09:47.001513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:31.274 [2024-06-09 09:09:47.001519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:31.274 [2024-06-09 09:09:47.001525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117864 len:8 PRP1 0x0 PRP2 0x0 00:31:31.274 [2024-06-09 09:09:47.001532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.274 [2024-06-09 09:09:47.001541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:31.274 [2024-06-09 09:09:47.001546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:31.274 [2024-06-09 09:09:47.001552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117872 len:8 PRP1 0x0 PRP2 0x0 00:31:31.274 [2024-06-09 09:09:47.001560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.274 [2024-06-09 09:09:47.001568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:31.274 [2024-06-09 09:09:47.001573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:31.274 [2024-06-09 09:09:47.001579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117880 len:8 PRP1 0x0 PRP2 0x0 00:31:31.274 [2024-06-09 09:09:47.001586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.274 [2024-06-09 09:09:47.001595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:31.274 [2024-06-09 09:09:47.001600] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:31.274 [2024-06-09 09:09:47.001606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117888 len:8 PRP1 0x0 PRP2 0x0 00:31:31.274 [2024-06-09 09:09:47.001613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.274 [2024-06-09 09:09:47.001621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:31.274 [2024-06-09 09:09:47.001628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:31.274 [2024-06-09 09:09:47.001634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117896 len:8 PRP1 0x0 PRP2 0x0 00:31:31.274 [2024-06-09 09:09:47.001642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.274 [2024-06-09 09:09:47.001649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:31.274 [2024-06-09 09:09:47.001654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:31.274 [2024-06-09 09:09:47.001660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117904 len:8 PRP1 0x0 PRP2 0x0 00:31:31.274 [2024-06-09 09:09:47.001668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.274 [2024-06-09 09:09:47.001707] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1561e00 was disconnected and freed. reset controller. 
00:31:31.274 [2024-06-09 09:09:47.001717] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:31.274 [2024-06-09 09:09:47.001744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.274 [2024-06-09 09:09:47.001755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.274 [2024-06-09 09:09:47.001764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.274 [2024-06-09 09:09:47.001771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.274 [2024-06-09 09:09:47.001779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.274 [2024-06-09 09:09:47.001788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.274 [2024-06-09 09:09:47.001796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.274 [2024-06-09 09:09:47.001803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.274 [2024-06-09 09:09:47.001810] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:31.274 [2024-06-09 09:09:47.001839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1540cf0 (9): Bad file descriptor 00:31:31.274 [2024-06-09 09:09:47.005407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:31.274 [2024-06-09 09:09:47.038877] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:31.274 00:31:31.274 Latency(us) 00:31:31.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:31.274 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:31.274 Verification LBA range: start 0x0 length 0x4000 00:31:31.274 NVMe0n1 : 15.01 11940.39 46.64 680.08 0.00 10114.51 1044.48 21736.11 00:31:31.274 =================================================================================================================== 00:31:31.274 Total : 11940.39 46.64 680.08 0.00 10114.51 1044.48 21736.11 00:31:31.274 Received shutdown signal, test time was about 15.000000 seconds 00:31:31.274 00:31:31.274 Latency(us) 00:31:31.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:31.274 =================================================================================================================== 00:31:31.274 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:31.274 09:09:53 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:31.274 09:09:53 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:31:31.274 09:09:53 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:31.274 09:09:53 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2775551 00:31:31.274 09:09:53 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2775551 /var/tmp/bdevperf.sock 00:31:31.274 09:09:53 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:31.274 09:09:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 2775551 ']' 00:31:31.274 09:09:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:31.274 09:09:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:31.274 09:09:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:31.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:31.274 09:09:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:31.274 09:09:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:31.846 09:09:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:31.846 09:09:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:31:31.846 09:09:54 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:31.846 [2024-06-09 09:09:54.293885] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:31.846 09:09:54 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:32.107 [2024-06-09 09:09:54.462274] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:32.108 09:09:54 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:32.369 NVMe0n1 00:31:32.369 09:09:54 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:32.630 00:31:32.630 09:09:55 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:32.891 00:31:32.891 09:09:55 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:32.891 09:09:55 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:33.152 09:09:55 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:33.152 09:09:55 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:36.453 09:09:58 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:36.453 09:09:58 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:36.453 09:09:58 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:36.453 09:09:58 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2776566 00:31:36.453 09:09:58 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2776566 00:31:37.394 0 00:31:37.655 09:09:59 
nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:37.655 [2024-06-09 09:09:53.388072] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:31:37.655 [2024-06-09 09:09:53.388129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2775551 ] 00:31:37.655 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.655 [2024-06-09 09:09:53.446607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.655 [2024-06-09 09:09:53.508832] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.655 [2024-06-09 09:09:55.652997] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:37.655 [2024-06-09 09:09:55.653041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.655 [2024-06-09 09:09:55.653053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.655 [2024-06-09 09:09:55.653061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.655 [2024-06-09 09:09:55.653069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.655 [2024-06-09 09:09:55.653076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.655 [2024-06-09 09:09:55.653083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.655 
[2024-06-09 09:09:55.653091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.655 [2024-06-09 09:09:55.653098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.655 [2024-06-09 09:09:55.653105] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.656 [2024-06-09 09:09:55.653132] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.656 [2024-06-09 09:09:55.653146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21dacf0 (9): Bad file descriptor 00:31:37.656 [2024-06-09 09:09:55.659747] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:37.656 Running I/O for 1 seconds... 00:31:37.656 00:31:37.656 Latency(us) 00:31:37.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:37.656 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:37.656 Verification LBA range: start 0x0 length 0x4000 00:31:37.656 NVMe0n1 : 1.01 11396.67 44.52 0.00 0.00 11173.32 2129.92 17039.36 00:31:37.656 =================================================================================================================== 00:31:37.656 Total : 11396.67 44.52 0.00 0.00 11173.32 2129.92 17039.36 00:31:37.656 09:09:59 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:37.656 09:09:59 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:37.656 09:10:00 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 
-f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:37.917 09:10:00 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:37.917 09:10:00 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:37.917 09:10:00 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:38.177 09:10:00 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:41.478 09:10:03 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:41.478 09:10:03 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:41.478 09:10:03 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2775551 00:31:41.478 09:10:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 2775551 ']' 00:31:41.478 09:10:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 2775551 00:31:41.478 09:10:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:31:41.478 09:10:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:41.478 09:10:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2775551 00:31:41.478 09:10:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:41.478 09:10:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:41.478 09:10:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2775551' 00:31:41.478 killing process with pid 2775551 00:31:41.478 09:10:03 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@968 -- # kill 2775551 00:31:41.478 09:10:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 2775551 00:31:41.478 09:10:04 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:41.478 09:10:04 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:41.739 rmmod nvme_tcp 00:31:41.739 rmmod nvme_fabrics 00:31:41.739 rmmod nvme_keyring 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2771833 ']' 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2771833 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 2771833 ']' 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 
2771833 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:41.739 09:10:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2771833 00:31:42.001 09:10:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:31:42.001 09:10:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:31:42.001 09:10:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2771833' 00:31:42.001 killing process with pid 2771833 00:31:42.001 09:10:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 2771833 00:31:42.001 09:10:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 2771833 00:31:42.001 09:10:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:42.001 09:10:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:42.001 09:10:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:42.001 09:10:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:42.001 09:10:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:42.001 09:10:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.001 09:10:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:42.001 09:10:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.941 09:10:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:43.941 00:31:43.941 real 0m39.067s 00:31:43.941 user 2m1.383s 00:31:43.941 sys 0m7.868s 00:31:43.942 09:10:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:43.942 
09:10:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:43.942 ************************************ 00:31:43.942 END TEST nvmf_failover 00:31:43.942 ************************************ 00:31:44.203 09:10:06 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:44.203 09:10:06 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:44.203 09:10:06 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:44.203 09:10:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:44.203 ************************************ 00:31:44.203 START TEST nvmf_host_discovery 00:31:44.203 ************************************ 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:44.203 * Looking for test storage... 
00:31:44.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:44.203 09:10:06 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:44.203 09:10:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:44.204 09:10:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:44.204 09:10:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:44.204 09:10:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:44.204 09:10:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:44.204 09:10:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:44.204 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:44.204 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:44.204 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:44.204 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:44.204 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:44.204 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.204 09:10:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:44.204 09:10:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:44.204 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:44.204 09:10:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:44.204 09:10:06 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@285 -- # xtrace_disable 00:31:44.204 09:10:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:50.792 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:50.792 09:10:12 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:50.792 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:50.792 
09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:50.792 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:50.792 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:50.792 09:10:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:50.792 09:10:13 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:50.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:50.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:31:50.792 00:31:50.792 --- 10.0.0.2 ping statistics --- 00:31:50.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.792 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:50.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:50.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.374 ms 00:31:50.792 00:31:50.792 --- 10.0.0.1 ping statistics --- 00:31:50.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.792 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # 
nvmfappstart -m 0x2 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:50.792 09:10:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:50.793 09:10:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.793 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2781575 00:31:50.793 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2781575 00:31:50.793 09:10:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 2781575 ']' 00:31:50.793 09:10:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.793 09:10:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:50.793 09:10:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:50.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:50.793 09:10:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:50.793 09:10:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.793 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:50.793 [2024-06-09 09:10:13.242800] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
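The `ip netns` / `ip addr` / `ip link` sequence in the log above carves one port of the NIC pair into a network namespace (cvl_0_0 at 10.0.0.2 inside cvl_0_0_ns_spdk, with cvl_0_1 at 10.0.0.1 left on the host side), then verifies reachability in both directions with `ping -c 1` before launching `nvmf_tgt` inside the namespace. A minimal sketch of the same topology follows; it is an assumption-laden stand-in, not the test's code: it uses a veth pair (`veth_tgt`/`veth_ini`, hypothetical names) instead of the physical ice ports, and by default only prints the commands, since actually running them requires root and iproute2.

```shell
#!/usr/bin/env bash
# Sketch of the namespace topology built in the log above, using a veth pair
# instead of the physical cvl_0_0/cvl_0_1 ports. Interface and namespace
# names are illustrative. With DRY_RUN=1 (the default) each command is only
# printed, since the real ones need root privileges.
set -euo pipefail

NS=cvl_0_0_ns_spdk        # namespace holding the target-side interface
TGT_IF=veth_tgt           # stand-in for cvl_0_0 (target side, 10.0.0.2)
INI_IF=veth_ini           # stand-in for cvl_0_1 (initiator side, 10.0.0.1)
DRY_RUN=${DRY_RUN:-1}

run() {                   # print in dry-run mode, execute otherwise
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run ip netns add "$NS"
run ip link add "$TGT_IF" type veth peer name "$INI_IF"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Verify reachability both ways, as the log does with ping -c 1
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Keeping the target's interface in its own namespace is what lets one machine exercise real TCP traffic between target and initiator, as the log's `NVMF_TARGET_NS_CMD` prefix (`ip netns exec cvl_0_0_ns_spdk`) then applies to the `nvmf_tgt` launch.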
00:31:50.793 [2024-06-09 09:10:13.242847] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:50.793 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.793 [2024-06-09 09:10:13.324598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.053 [2024-06-09 09:10:13.388903] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:51.053 [2024-06-09 09:10:13.388939] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:51.053 [2024-06-09 09:10:13.388947] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:51.053 [2024-06-09 09:10:13.388953] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:51.054 [2024-06-09 09:10:13.388958] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:51.054 [2024-06-09 09:10:13.388980] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:51.625 09:10:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:51.625 09:10:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:31:51.625 09:10:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:51.625 09:10:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:51.625 09:10:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.625 09:10:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:51.625 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:51.625 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:51.625 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.625 [2024-06-09 09:10:14.043496] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:51.625 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:51.625 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:51.625 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:51.625 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.625 [2024-06-09 09:10:14.051639] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:51.625 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- 
# rpc_cmd bdev_null_create null0 1000 512 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.626 null0 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.626 null1 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2781718 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2781718 /tmp/host.sock 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 2781718 ']' 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:31:51.626 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.626 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:51.626 [2024-06-09 09:10:14.136938] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:31:51.626 [2024-06-09 09:10:14.136986] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2781718 ] 00:31:51.626 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.887 [2024-06-09 09:10:14.195322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.887 [2024-06-09 09:10:14.259825] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:52.459 09:10:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.459 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:52.459 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:52.459 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.460 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:52.721 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.721 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:52.721 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:52.721 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:52.721 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.721 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:52.721 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:52.721 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:52.721 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.721 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:52.721 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:52.721 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:52.721 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:52.721 09:10:15 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.721 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:52.722 
09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:52.722 [2024-06-09 09:10:15.250741] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
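The `get_subsystem_names` and `get_bdev_list` helpers above each pipe RPC JSON through `jq -r '.[].name' | sort | xargs`, collapsing the controller or bdev names into one sorted, space-separated line so the test can compare it with a plain `[[ ... == ... ]]`. As a dependency-free illustration of that shaping step, the sketch below does the same extraction with `sed` instead of `jq`; it only copes with the flat, one-`"name"`-per-line output seen here and is not a general JSON parser.

```shell
# Same shaping as the log's  jq -r '.[].name' | sort | xargs  pipeline, but
# using sed so it runs without jq. Only handles flat "name": "..." fields,
# one per line, as produced by bdev_get_bdevs-style pretty-printed output.
names_from_json() {
    sed -n 's/.*"name": *"\([^"]*\)".*/\1/p' | sort | xargs
}

# Example with bdev_get_bdevs-like output for the two null bdevs in the log:
printf '%s\n' '[' \
    '  {"name": "null1", "block_size": 512},' \
    '  {"name": "null0", "block_size": 512}' \
    ']' | names_from_json
```

The empty-string comparisons in the log (`[[ '' == '' ]]`) are exactly this pipeline producing nothing while no controllers or bdevs exist yet.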
00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:52.722 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.983 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:52.983 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:52.983 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:52.983 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:52.983 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:52.983 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.983 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:52.983 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:52.983 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.983 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:52.983 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:52.983 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:52.983 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:52.983 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:52.983 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:31:52.983 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:31:52.983 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 
00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names
00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == \n\v\m\e\0 ]]
00:31:52.984 09:10:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1
00:31:53.556 [2024-06-09 09:10:15.952713] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:31:53.556 [2024-06-09 09:10:15.952738] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:31:53.556 [2024-06-09 09:10:15.952753] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:31:53.556 [2024-06-09 09:10:16.040017] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:31:53.817 [2024-06-09 09:10:16.144142] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:31:53.817 [2024-06-09 09:10:16.144165] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:31:54.079 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0 ]]
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count ))
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:54.341 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count ))
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:54.603 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:54.603 [2024-06-09 09:10:16.995291] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
[2024-06-09 09:10:16.996098] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
[2024-06-09 09:10:16.996123] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:31:54.604 09:10:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:54.604 09:10:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
[2024-06-09 09:10:17.085750] bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
[2024-06-09 09:10:17.150541] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
[2024-06-09 09:10:17.150559] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
[2024-06-09 09:10:17.150565] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:31:54.604 09:10:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:55.991 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count ))
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:55.992 [2024-06-09 09:10:18.247005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.992 [2024-06-09 09:10:18.247029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.992 [2024-06-09 09:10:18.247038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.992 [2024-06-09 09:10:18.247046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.992 [2024-06-09 09:10:18.247054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.992 [2024-06-09 09:10:18.247061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.992 [2024-06-09 09:10:18.247069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.992 [2024-06-09 09:10:18.247076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.992 [2024-06-09 09:10:18.247083] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab51c0 is same with the state(5) to be set
00:31:55.992 [2024-06-09 09:10:18.247846] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:31:55.992 [2024-06-09 09:10:18.247861] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names
[2024-06-09 09:10:18.257016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab51c0 (9): Bad file descriptor
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
[2024-06-09 09:10:18.267056] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-06-09 09:10:18.267667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-06-09 09:10:18.267705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab51c0 with addr=10.0.0.2, port=4420
[2024-06-09 09:10:18.267717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab51c0 is same with the state(5) to be set
[2024-06-09 09:10:18.267735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab51c0 (9): Bad file descriptor
[2024-06-09 09:10:18.267766] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-06-09 09:10:18.267775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-06-09 09:10:18.267783] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-06-09 09:10:18.267799] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
[2024-06-09 09:10:18.277111] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-06-09 09:10:18.277688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-06-09 09:10:18.277727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab51c0 with addr=10.0.0.2, port=4420
[2024-06-09 09:10:18.277739] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab51c0 is same with the state(5) to be set
[2024-06-09 09:10:18.277759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab51c0 (9): Bad file descriptor
[2024-06-09 09:10:18.277771] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-06-09 09:10:18.277777] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-06-09 09:10:18.277785] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-06-09 09:10:18.277800] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:55.992 [2024-06-09 09:10:18.287168] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:55.992 [2024-06-09 09:10:18.287660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.992 [2024-06-09 09:10:18.287675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab51c0 with addr=10.0.0.2, port=4420 00:31:55.992 [2024-06-09 09:10:18.287682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab51c0 is same with the state(5) to be set 00:31:55.992 [2024-06-09 09:10:18.287693] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab51c0 (9): Bad file descriptor 00:31:55.992 [2024-06-09 09:10:18.287703] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:55.992 [2024-06-09 09:10:18.287719] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:55.992 [2024-06-09 09:10:18.287726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:55.992 [2024-06-09 09:10:18.287737] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:55.992 [2024-06-09 09:10:18.297228] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:55.992 [2024-06-09 09:10:18.297824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.992 [2024-06-09 09:10:18.297862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab51c0 with addr=10.0.0.2, port=4420 00:31:55.992 [2024-06-09 09:10:18.297873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab51c0 is same with the state(5) to be set 00:31:55.992 [2024-06-09 09:10:18.297891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab51c0 (9): Bad file descriptor 00:31:55.992 [2024-06-09 09:10:18.297903] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:55.992 [2024-06-09 09:10:18.297910] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:55.992 [2024-06-09 09:10:18.297918] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:55.992 [2024-06-09 09:10:18.297933] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:55.992 [2024-06-09 09:10:18.307287] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:55.992 [2024-06-09 09:10:18.307883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.992 [2024-06-09 09:10:18.307921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab51c0 with addr=10.0.0.2, port=4420 00:31:55.992 [2024-06-09 09:10:18.307932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab51c0 is same with the state(5) to be set 00:31:55.992 [2024-06-09 09:10:18.307950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab51c0 (9): Bad file descriptor 00:31:55.992 [2024-06-09 09:10:18.307977] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:55.992 [2024-06-09 09:10:18.307985] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:55.992 [2024-06-09 09:10:18.307997] 
nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:55.992 [2024-06-09 09:10:18.308013] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.992 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:55.992 [2024-06-09 09:10:18.317344] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:55.992 [2024-06-09 09:10:18.317840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.992 [2024-06-09 09:10:18.317856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab51c0 with addr=10.0.0.2, port=4420 00:31:55.992 [2024-06-09 09:10:18.317863] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab51c0 is same with the state(5) to be set 00:31:55.993 [2024-06-09 09:10:18.317875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab51c0 (9): Bad file descriptor 00:31:55.993 [2024-06-09 09:10:18.317891] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:55.993 [2024-06-09 09:10:18.317898] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:55.993 
[2024-06-09 09:10:18.317906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:55.993 [2024-06-09 09:10:18.317917] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:55.993 [2024-06-09 09:10:18.327404] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:55.993 [2024-06-09 09:10:18.327902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.993 [2024-06-09 09:10:18.327915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab51c0 with addr=10.0.0.2, port=4420 00:31:55.993 [2024-06-09 09:10:18.327922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab51c0 is same with the state(5) to be set 00:31:55.993 [2024-06-09 09:10:18.327933] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab51c0 (9): Bad file descriptor 00:31:55.993 [2024-06-09 09:10:18.327951] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:55.993 [2024-06-09 09:10:18.327958] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:55.993 [2024-06-09 09:10:18.327965] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:55.993 [2024-06-09 09:10:18.327975] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:55.993 [2024-06-09 09:10:18.334977] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:55.993 [2024-06-09 09:10:18.334995] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4421 == \4\4\2\1 ]] 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@916 -- # get_subsystem_names 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:55.993 09:10:18 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.993 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:56.255 09:10:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.199 [2024-06-09 09:10:19.641806] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:57.199 [2024-06-09 09:10:19.641823] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:57.199 [2024-06-09 09:10:19.641836] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:57.460 [2024-06-09 09:10:19.772242] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:57.460 [2024-06-09 09:10:19.876496] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:57.460 [2024-06-09 09:10:19.876530] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.460 request: 00:31:57.460 { 00:31:57.460 "name": "nvme", 00:31:57.460 "trtype": "tcp", 00:31:57.460 "traddr": "10.0.0.2", 00:31:57.460 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:57.460 "adrfam": "ipv4", 00:31:57.460 "trsvcid": "8009", 00:31:57.460 "wait_for_attach": true, 00:31:57.460 "method": "bdev_nvme_start_discovery", 00:31:57.460 "req_id": 1 00:31:57.460 } 00:31:57.460 Got JSON-RPC error 
response 00:31:57.460 response: 00:31:57.460 { 00:31:57.460 "code": -17, 00:31:57.460 "message": "File exists" 00:31:57.460 } 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.460 09:10:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:57.460 09:10:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:57.460 09:10:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:57.460 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:31:57.460 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:57.460 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:31:57.460 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:57.460 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:31:57.460 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:57.460 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:57.460 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:57.460 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.722 request: 00:31:57.722 { 00:31:57.722 "name": "nvme_second", 00:31:57.722 "trtype": "tcp", 
00:31:57.722 "traddr": "10.0.0.2", 00:31:57.722 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:57.722 "adrfam": "ipv4", 00:31:57.722 "trsvcid": "8009", 00:31:57.722 "wait_for_attach": true, 00:31:57.722 "method": "bdev_nvme_start_discovery", 00:31:57.722 "req_id": 1 00:31:57.722 } 00:31:57.722 Got JSON-RPC error response 00:31:57.722 response: 00:31:57.722 { 00:31:57.722 "code": -17, 00:31:57.722 "message": "File exists" 00:31:57.722 } 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q 
nqn.2021-12.io.spdk:test -T 3000 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:57.722 09:10:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:58.743 [2024-06-09 09:10:21.141534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.743 [2024-06-09 09:10:21.141566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab1230 with addr=10.0.0.2, port=8010 00:31:58.743 [2024-06-09 09:10:21.141579] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:58.743 [2024-06-09 09:10:21.141587] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:58.743 [2024-06-09 09:10:21.141594] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:59.685 [2024-06-09 09:10:22.143873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.685 [2024-06-09 09:10:22.143897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab1230 with addr=10.0.0.2, port=8010 00:31:59.685 [2024-06-09 09:10:22.143909] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:59.685 [2024-06-09 09:10:22.143916] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:59.685 [2024-06-09 09:10:22.143922] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:00.628 [2024-06-09 09:10:23.145729] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:00.628 request: 00:32:00.628 { 00:32:00.628 "name": "nvme_second", 00:32:00.628 "trtype": "tcp", 00:32:00.628 "traddr": "10.0.0.2", 00:32:00.628 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:00.628 "adrfam": "ipv4", 00:32:00.628 "trsvcid": "8010", 00:32:00.628 "attach_timeout_ms": 3000, 
00:32:00.628 "method": "bdev_nvme_start_discovery", 00:32:00.628 "req_id": 1 00:32:00.628 } 00:32:00.628 Got JSON-RPC error response 00:32:00.628 response: 00:32:00.628 { 00:32:00.628 "code": -110, 00:32:00.628 "message": "Connection timed out" 00:32:00.628 } 00:32:00.628 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:00.628 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:32:00.628 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:00.628 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:00.628 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:00.628 09:10:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:00.628 09:10:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:00.628 09:10:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:00.628 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.628 09:10:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:00.628 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:00.628 09:10:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:00.628 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2781718 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:00.889 09:10:23 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:00.889 rmmod nvme_tcp 00:32:00.889 rmmod nvme_fabrics 00:32:00.889 rmmod nvme_keyring 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2781575 ']' 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2781575 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@949 -- # '[' -z 2781575 ']' 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # kill -0 2781575 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # uname 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2781575 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2781575' 
00:32:00.889 killing process with pid 2781575 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@968 -- # kill 2781575 00:32:00.889 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@973 -- # wait 2781575 00:32:01.149 09:10:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:01.149 09:10:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:01.149 09:10:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:01.149 09:10:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:01.149 09:10:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:01.149 09:10:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.149 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:01.149 09:10:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.058 09:10:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:03.058 00:32:03.058 real 0m18.966s 00:32:03.058 user 0m22.982s 00:32:03.058 sys 0m6.110s 00:32:03.058 09:10:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:03.058 09:10:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:03.058 ************************************ 00:32:03.058 END TEST nvmf_host_discovery 00:32:03.058 ************************************ 00:32:03.058 09:10:25 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:03.058 09:10:25 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:03.058 09:10:25 nvmf_tcp -- common/autotest_common.sh@1106 -- # 
xtrace_disable 00:32:03.058 09:10:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:03.058 ************************************ 00:32:03.058 START TEST nvmf_host_multipath_status 00:32:03.058 ************************************ 00:32:03.058 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:03.319 * Looking for test storage... 00:32:03.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:32:03.319 09:10:25 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:03.319 
09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:32:03.319 09:10:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:09.943 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:09.943 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:09.943 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:09.943 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:09.943 09:10:32 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:09.943 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:09.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:09.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:32:09.943 00:32:09.943 --- 10.0.0.2 ping statistics --- 00:32:09.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.944 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:09.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:09.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:32:09.944 00:32:09.944 --- 10.0.0.1 ping statistics --- 00:32:09.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.944 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2787769 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2787769 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 2787769 ']' 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:09.944 09:10:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:10.205 [2024-06-09 09:10:32.533397] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:32:10.205 [2024-06-09 09:10:32.533449] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:10.205 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.205 [2024-06-09 09:10:32.596270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:10.205 [2024-06-09 09:10:32.660297] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:10.205 [2024-06-09 09:10:32.660330] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:10.205 [2024-06-09 09:10:32.660338] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:10.205 [2024-06-09 09:10:32.660344] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:10.205 [2024-06-09 09:10:32.660350] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:10.205 [2024-06-09 09:10:32.660482] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:32:10.205 [2024-06-09 09:10:32.660595] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:32:10.777 09:10:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:32:10.777 09:10:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0
00:32:10.777 09:10:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:32:10.777 09:10:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable
00:32:10.777 09:10:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:32:11.038 09:10:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:11.038 09:10:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2787769
00:32:11.038 09:10:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:32:11.038 [2024-06-09 09:10:33.488052] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:11.038 09:10:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:32:11.299 Malloc0
00:32:11.299 09:10:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:32:11.299 09:10:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:11.560 09:10:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:11.819 [2024-06-09 09:10:34.127623] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:11.819 09:10:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:32:11.819 [2024-06-09 09:10:34.280002] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:32:11.819 09:10:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2788136
00:32:11.819 09:10:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:32:11.819 09:10:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:32:11.819 09:10:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2788136 /var/tmp/bdevperf.sock
00:32:11.819 09:10:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 2788136 ']'
00:32:11.819 09:10:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:11.819 09:10:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100
00:32:11.819 09:10:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:32:11.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:32:11.819 09:10:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable
00:32:11.819 09:10:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:32:12.761 09:10:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:32:12.761 09:10:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0
00:32:12.761 09:10:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:32:12.762 09:10:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
00:32:13.333 Nvme0n1
00:32:13.333 09:10:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:32:13.593 Nvme0n1
00:32:13.593 09:10:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:32:13.593 09:10:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:32:15.507 09:10:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:32:15.507 09:10:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:32:15.768 09:10:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:32:16.028 09:10:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:32:16.972 09:10:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:32:16.972 09:10:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:32:16.972 09:10:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:16.972 09:10:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:17.233 09:10:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:17.233 09:10:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:32:17.233 09:10:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:17.233 09:10:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:17.233 09:10:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:17.233 09:10:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:17.233 09:10:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:17.233 09:10:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:17.495 09:10:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:17.495 09:10:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:17.495 09:10:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:17.495 09:10:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:17.756 09:10:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:17.756 09:10:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:17.756 09:10:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:17.756 09:10:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:17.756 09:10:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:17.756 09:10:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:17.756 09:10:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:17.756 09:10:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:18.017 09:10:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:18.017 09:10:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:32:18.017 09:10:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:32:18.017 09:10:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:32:18.279 09:10:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:32:19.221 09:10:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:32:19.221 09:10:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:32:19.221 09:10:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:19.221 09:10:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:19.482 09:10:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:19.482 09:10:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:32:19.482 09:10:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:19.482 09:10:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:19.743 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:19.743 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:19.743 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:19.743 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:19.743 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:19.743 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:19.743 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:19.743 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:20.005 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:20.005 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:20.005 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:20.005 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:20.266 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:20.266 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:20.266 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:20.266 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:20.266 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:20.266 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:32:20.266 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:32:20.527 09:10:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:32:20.787 09:10:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:32:21.730 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:32:21.730 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:32:21.730 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:21.730 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:21.730 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:21.730 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:32:21.730 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:21.992 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:21.992 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:21.992 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:21.992 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:21.992 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:22.253 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:22.253 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:22.253 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:22.253 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:22.253 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:22.253 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:22.253 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:22.253 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:22.513 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:22.513 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:22.513 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:22.513 09:10:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:22.775 09:10:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:22.775 09:10:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
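Each set_ANA_state / check_status pair in this log follows one pattern: a listener's ANA state decides whether its path is reported accessible, the TCP connection stays up either way, and "current" marks the path the initiator actually uses. A minimal Python sketch of that expectation table (the helper name and the explicit is_current_path argument are assumptions for illustration, not part of the test script; tie-breaking between equally good paths is initiator policy, so it is passed in rather than derived):

```python
# Hypothetical model of the flags check_status asserts per port.
# Assumptions: "connected" reflects only the TCP connection, which stays up
# regardless of ANA state; "accessible" is false only for "inaccessible".
def expected_flags(ana_state, is_current_path):
    accessible = ana_state != "inaccessible"
    return {
        "current": is_current_path,
        "connected": True,
        "accessible": accessible,
    }

# e.g. set_ANA_state non_optimized inaccessible above is followed by
# check_status true false true true true false:
port_4420 = expected_flags("non_optimized", is_current_path=True)
port_4421 = expected_flags("inaccessible", is_current_path=False)
assert [port_4420["current"], port_4421["current"]] == [True, False]
assert [port_4420["connected"], port_4421["connected"]] == [True, True]
assert [port_4420["accessible"], port_4421["accessible"]] == [True, False]
```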
00:32:22.775 09:10:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:32:22.775 09:10:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:32:23.036 09:10:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:32:23.979 09:10:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:32:23.979 09:10:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:32:23.979 09:10:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:23.979 09:10:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:24.239 09:10:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:24.239 09:10:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:32:24.239 09:10:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:24.239 09:10:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:24.500 09:10:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:24.500 09:10:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:24.500 09:10:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:24.500 09:10:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:24.500 09:10:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:24.500 09:10:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:24.500 09:10:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:24.500 09:10:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:24.761 09:10:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:24.761 09:10:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:24.761 09:10:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:24.761 09:10:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:25.021 09:10:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:25.021 09:10:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:32:25.021 09:10:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:25.022 09:10:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:25.022 09:10:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:25.022 09:10:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:32:25.022 09:10:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:32:25.282 09:10:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:32:25.282 09:10:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:32:26.666 09:10:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:32:26.666 09:10:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:32:26.666 09:10:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:26.666 09:10:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:26.666 09:10:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:26.666 09:10:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:32:26.666 09:10:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:26.666 09:10:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:26.666 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:26.666 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:26.666 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:26.666 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:26.927 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:26.927 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:26.927 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:26.927 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:26.927 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:26.927 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:32:26.927 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:26.927 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:27.201 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:27.201 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:32:27.201 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:27.201 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:27.522 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:27.522 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:32:27.522 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:32:27.522 09:10:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:32:27.782 09:10:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:32:28.726 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:32:28.726 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:32:28.726 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:28.726 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:28.987 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:28.987 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:32:28.987 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:28.987 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:28.987 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:28.987 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:28.987 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:28.987 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:29.249 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:29.249 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:29.249 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:29.249 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:29.249 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:29.249 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:32:29.510 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:29.510 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:29.510 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:29.510 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:29.510 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:29.510 09:10:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:29.771 09:10:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:29.771 09:10:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:32:29.771 09:10:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:32:29.771 09:10:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:32:30.033 09:10:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:32:30.294 09:10:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:32:31.237 09:10:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:32:31.237 09:10:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:32:31.237 09:10:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:31.237 09:10:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:31.498 09:10:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:31.498 09:10:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:32:31.498 09:10:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:31.498 09:10:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:31.498 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:31.498 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:31.498 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:31.498 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:31.758 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:31.758 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:31.758 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:31.758 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:32.019 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:32.019 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:32.019 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:32.019 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:32.019 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:32.019 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:32.019 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:32.019 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:32.279 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:32.279 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:32:32.279 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:32:32.540 09:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:32:32.540 09:10:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:32:33.483 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:32:33.483 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:32:33.483 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:33.483 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:33.744 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:33.744 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:32:33.744 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:33.744 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:34.005 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:34.005 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:34.005 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:34.005 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:34.005 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:34.005 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:34.005 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:34.005 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:34.265 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:34.266 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:34.266 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:34.266 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:34.527 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:34.527 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:34.527 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:34.527 09:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:34.527 09:10:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:34.527 09:10:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:34.527 09:10:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:34.787 09:10:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:35.048 09:10:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:35.991 09:10:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:35.991 09:10:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:35.991 09:10:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.991 09:10:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:35.991 09:10:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:35.991 09:10:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:35.991 09:10:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.991 09:10:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:36.252 09:10:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:36.252 09:10:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:36.252 09:10:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.252 09:10:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:36.514 09:10:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:36.514 09:10:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:36.514 09:10:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:36.514 09:10:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.514 09:10:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:36.514 09:10:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:36.514 09:10:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.514 09:10:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:36.775 09:10:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:36.775 09:10:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:36.775 09:10:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.775 09:10:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:37.036 09:10:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.036 09:10:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:37.036 09:10:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:37.036 09:10:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:37.297 09:10:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:38.242 09:11:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:38.242 09:11:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:38.242 09:11:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.242 09:11:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:38.503 09:11:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.503 09:11:00 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:38.503 09:11:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.503 09:11:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:38.764 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:38.764 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:38.764 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.764 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:38.764 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.764 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:38.764 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.764 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:39.025 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:39.025 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:39.025 
09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.025 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:39.286 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:39.286 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:39.286 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.286 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:39.286 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:39.286 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2788136 00:32:39.286 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 2788136 ']' 00:32:39.286 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 2788136 00:32:39.286 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:32:39.286 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:39.286 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2788136 00:32:39.286 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:32:39.286 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:32:39.286 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2788136' 00:32:39.286 killing process with pid 2788136 00:32:39.286 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 2788136 00:32:39.286 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 2788136 00:32:39.550 Connection closed with partial response: 00:32:39.550 00:32:39.550 00:32:39.550 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2788136 00:32:39.550 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:39.550 [2024-06-09 09:10:34.339117] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:32:39.550 [2024-06-09 09:10:34.339173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788136 ] 00:32:39.550 EAL: No free 2048 kB hugepages reported on node 1 00:32:39.550 [2024-06-09 09:10:34.388962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.550 [2024-06-09 09:10:34.440968] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:32:39.550 Running I/O for 90 seconds... 
00:32:39.550 [2024-06-09 09:10:47.654649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.550 [2024-06-09 09:10:47.654685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.654716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:69888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.550 [2024-06-09 09:10:47.654723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.654734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.550 [2024-06-09 09:10:47.654740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.654751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.550 [2024-06-09 09:10:47.654756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.654766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:69912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.550 [2024-06-09 09:10:47.654771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.654781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.550 
[2024-06-09 09:10:47.654787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.654797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:69928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.550 [2024-06-09 09:10:47.654802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.654813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:69936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.550 [2024-06-09 09:10:47.654818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.654828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:69944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.550 [2024-06-09 09:10:47.654834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.654844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:69952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.550 [2024-06-09 09:10:47.654849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.654859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:69960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.550 [2024-06-09 09:10:47.654869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 
09:10:47.654879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:69968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.550 [2024-06-09 09:10:47.654884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.654894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:69976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.550 [2024-06-09 09:10:47.654900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.654910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:69984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.550 [2024-06-09 09:10:47.654915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.654926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:69992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.550 [2024-06-09 09:10:47.654931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.654941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.550 [2024-06-09 09:10:47.654946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.654958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.550 [2024-06-09 09:10:47.654963] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.654974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.550 [2024-06-09 09:10:47.654980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.655123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.550 [2024-06-09 09:10:47.655130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.655143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.550 [2024-06-09 09:10:47.655148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.655160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.550 [2024-06-09 09:10:47.655165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.655176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.550 [2024-06-09 09:10:47.655182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.655193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.550 [2024-06-09 09:10:47.655198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.655212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.550 [2024-06-09 09:10:47.655217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.655758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.550 [2024-06-09 09:10:47.655765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:39.550 [2024-06-09 09:10:47.655778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.550 [2024-06-09 09:10:47.655784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:39.551 [2024-06-09 09:10:47.655796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.551 [2024-06-09 09:10:47.655801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:39.551 [2024-06-09 09:10:47.655814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.551 [2024-06-09 09:10:47.655819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:39.551 [2024-06-09 09:10:47.655832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.551 [2024-06-09 09:10:47.655837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:39.551 [2024-06-09 09:10:47.655850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.551 [2024-06-09 09:10:47.655855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:39.551 [2024-06-09 09:10:47.655867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.551 [2024-06-09 09:10:47.655873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:39.551 [2024-06-09 09:10:47.655886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.551 [2024-06-09 09:10:47.655891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:39.551 [2024-06-09 09:10:47.655904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.551 [2024-06-09 09:10:47.655908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:39.551 [2024-06-09 09:10:47.655921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:39.551 [2024-06-09 09:10:47.655927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
[... ~85 further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs trimmed for brevity: two bursts of queued I/Os on qid:1 nsid:1 (READs lba 69768-69872 and WRITEs lba 70072-70512 at 09:10:47; WRITEs lba 19080-19304 and READs lba 18480-19072 at 09:10:59), every one completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 001a through 006f ...]
00:32:39.553 [2024-06-09 09:10:59.738484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:39.553 [2024-06-09 09:10:59.738489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:32:39.553 [2024-06-09 09:10:59.738499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:39.553 [2024-06-09 09:10:59.738504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:32:39.553 [2024-06-09 09:10:59.738514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:39.553 [2024-06-09 09:10:59.738520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:32:39.553 [2024-06-09 09:10:59.738530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:39.553 [2024-06-09 09:10:59.738535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:32:39.553 Received shutdown signal, test time was about 25.671747 seconds
00:32:39.553
00:32:39.553 Latency(us)
00:32:39.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:39.553 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:39.553 Verification LBA range: start 0x0 length 0x4000
00:32:39.553 Nvme0n1 : 25.67 11118.13 43.43 0.00 0.00 11493.85 291.84 3019898.88
00:32:39.553 ===================================================================================================================
00:32:39.553 Total : 11118.13 43.43 0.00 0.00 11493.85 
291.84 3019898.88 00:32:39.553 09:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:39.553 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:32:39.553 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:39.815 rmmod nvme_tcp 00:32:39.815 rmmod nvme_fabrics 00:32:39.815 rmmod nvme_keyring 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2787769 ']' 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2787769 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 2787769 ']' 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status 
-- common/autotest_common.sh@953 -- # kill -0 2787769 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2787769 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2787769' 00:32:39.815 killing process with pid 2787769 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 2787769 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 2787769 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:39.815 09:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.363 09:11:04 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:42.363 00:32:42.363 real 0m38.814s 00:32:42.363 user 1m41.435s 00:32:42.363 sys 0m10.160s 00:32:42.363 09:11:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:42.363 09:11:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:42.363 ************************************ 00:32:42.363 END TEST nvmf_host_multipath_status 00:32:42.363 ************************************ 00:32:42.363 09:11:04 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:42.363 09:11:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:42.363 09:11:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:42.363 09:11:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:42.363 ************************************ 00:32:42.363 START TEST nvmf_discovery_remove_ifc 00:32:42.363 ************************************ 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:42.363 * Looking for test storage... 
00:32:42.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:42.363 09:11:04 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.363 09:11:04 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:42.363 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:42.364 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:42.364 09:11:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:48.957 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:48.957 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:48.957 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:48.958 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:48.958 Found net devices under 0000:4b:00.1: cvl_0_1 
00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:48.958 
09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:48.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:48.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:32:48.958 00:32:48.958 --- 10.0.0.2 ping statistics --- 00:32:48.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.958 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:48.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:48.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.425 ms 00:32:48.958 00:32:48.958 --- 10.0.0.1 ping statistics --- 00:32:48.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.958 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:48.958 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:49.249 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:49.249 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:49.249 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:49.249 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:49.249 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2797665 00:32:49.249 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2797665 00:32:49.249 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:49.249 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 2797665 ']' 00:32:49.249 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:49.249 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:49.249 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:49.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:49.249 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:49.249 09:11:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:49.249 [2024-06-09 09:11:11.581875] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:32:49.249 [2024-06-09 09:11:11.581936] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:49.249 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.249 [2024-06-09 09:11:11.667695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.249 [2024-06-09 09:11:11.759169] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:49.249 [2024-06-09 09:11:11.759226] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:49.249 [2024-06-09 09:11:11.759234] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:49.249 [2024-06-09 09:11:11.759240] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:49.249 [2024-06-09 09:11:11.759246] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:49.249 [2024-06-09 09:11:11.759272] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:49.821 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:49.821 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:32:49.821 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:49.821 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:49.821 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:50.082 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:50.082 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:50.082 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.082 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:50.082 [2024-06-09 09:11:12.418636] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:50.082 [2024-06-09 09:11:12.426839] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:50.082 null0 00:32:50.082 [2024-06-09 09:11:12.458820] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:50.082 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.082 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2797700 00:32:50.082 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2797700 /tmp/host.sock 00:32:50.082 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:50.082 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 2797700 ']' 00:32:50.082 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:32:50.082 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:50.082 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:50.082 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:50.082 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:50.082 09:11:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:50.082 [2024-06-09 09:11:12.538084] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:32:50.082 [2024-06-09 09:11:12.538145] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2797700 ] 00:32:50.082 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.082 [2024-06-09 09:11:12.602467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.342 [2024-06-09 09:11:12.678758] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.913 09:11:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:50.913 09:11:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:32:50.913 09:11:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:50.913 09:11:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:50.913 09:11:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.913 09:11:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:50.913 09:11:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.913 09:11:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:50.913 09:11:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.913 09:11:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:50.913 09:11:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.913 09:11:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:50.913 09:11:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.913 09:11:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:52.298 [2024-06-09 09:11:14.427675] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:52.298 [2024-06-09 09:11:14.427699] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:52.298 [2024-06-09 09:11:14.427715] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:52.298 [2024-06-09 09:11:14.514994] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:52.298 [2024-06-09 09:11:14.617738] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:52.298 [2024-06-09 09:11:14.617787] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:52.298 [2024-06-09 09:11:14.617809] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:52.298 [2024-06-09 09:11:14.617824] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:52.298 [2024-06-09 09:11:14.617848] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:52.298 [2024-06-09 09:11:14.627373] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf5c820 was disconnected and freed. delete nvme_qpair. 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:52.298 09:11:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:53.682 09:11:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:53.682 09:11:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:53.682 09:11:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:53.682 09:11:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:53.682 09:11:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:53.682 09:11:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:53.682 09:11:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:53.682 09:11:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:53.682 09:11:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:53.682 09:11:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:54.624 09:11:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:54.624 09:11:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 
-- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:54.624 09:11:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:54.624 09:11:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:54.624 09:11:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:54.624 09:11:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:54.624 09:11:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:54.624 09:11:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:54.624 09:11:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:54.624 09:11:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:55.565 09:11:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:55.565 09:11:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:55.565 09:11:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:55.565 09:11:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:55.565 09:11:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:55.565 09:11:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:55.565 09:11:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:55.565 09:11:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:55.565 09:11:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:55.565 09:11:18 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:56.505 09:11:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:56.505 09:11:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:56.505 09:11:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:56.505 09:11:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:56.505 09:11:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:56.505 09:11:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:56.505 09:11:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:56.505 09:11:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:56.765 09:11:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:56.765 09:11:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:57.706 [2024-06-09 09:11:20.058101] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:57.706 [2024-06-09 09:11:20.058143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.706 [2024-06-09 09:11:20.058154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.706 [2024-06-09 09:11:20.058164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.706 [2024-06-09 09:11:20.058172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.706 [2024-06-09 09:11:20.058180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.706 [2024-06-09 09:11:20.058187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.706 [2024-06-09 09:11:20.058195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.706 [2024-06-09 09:11:20.058207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.706 [2024-06-09 09:11:20.058216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.706 [2024-06-09 09:11:20.058224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.706 [2024-06-09 09:11:20.058231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf23be0 is same with the state(5) to be set 00:32:57.707 [2024-06-09 09:11:20.068120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf23be0 (9): Bad file descriptor 00:32:57.707 09:11:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:57.707 09:11:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:57.707 09:11:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.707 09:11:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:57.707 09:11:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 
00:32:57.707 09:11:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:57.707 09:11:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:57.707 [2024-06-09 09:11:20.078161] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:58.648 [2024-06-09 09:11:21.138457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:58.648 [2024-06-09 09:11:21.138500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf23be0 with addr=10.0.0.2, port=4420 00:32:58.648 [2024-06-09 09:11:21.138513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf23be0 is same with the state(5) to be set 00:32:58.648 [2024-06-09 09:11:21.138540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf23be0 (9): Bad file descriptor 00:32:58.648 [2024-06-09 09:11:21.138883] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:58.648 [2024-06-09 09:11:21.138902] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:58.648 [2024-06-09 09:11:21.138909] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:58.648 [2024-06-09 09:11:21.138917] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:58.648 [2024-06-09 09:11:21.138933] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.648 [2024-06-09 09:11:21.138942] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:58.648 09:11:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:58.648 09:11:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:58.648 09:11:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:59.591 [2024-06-09 09:11:22.141324] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.592 [2024-06-09 09:11:22.141359] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:59.592 [2024-06-09 09:11:22.141381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:59.592 [2024-06-09 09:11:22.141391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:59.592 [2024-06-09 09:11:22.141405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:59.592 [2024-06-09 09:11:22.141412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:59.592 [2024-06-09 09:11:22.141425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:59.592 [2024-06-09 09:11:22.141432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:59.592 [2024-06-09 09:11:22.141440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:32:59.592 [2024-06-09 09:11:22.141447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:59.592 [2024-06-09 09:11:22.141455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:59.592 [2024-06-09 09:11:22.141462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:59.592 [2024-06-09 09:11:22.141469] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:32:59.592 [2024-06-09 09:11:22.141833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf23070 (9): Bad file descriptor 00:32:59.592 [2024-06-09 09:11:22.142844] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:59.592 [2024-06-09 09:11:22.142856] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:59.853 09:11:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:01.242 09:11:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:01.242 09:11:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:01.242 09:11:23 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:01.242 09:11:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:01.242 09:11:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:01.242 09:11:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:01.242 09:11:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:01.242 09:11:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:01.242 09:11:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:01.242 09:11:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:01.816 [2024-06-09 09:11:24.162298] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:01.816 [2024-06-09 09:11:24.162319] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:01.816 [2024-06-09 09:11:24.162333] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:01.816 [2024-06-09 09:11:24.290760] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:02.077 09:11:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:02.077 09:11:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:02.077 09:11:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:02.077 09:11:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:02.077 09:11:24 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:33:02.077 09:11:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:02.077 09:11:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:02.077 09:11:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:02.077 [2024-06-09 09:11:24.477083] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:02.077 [2024-06-09 09:11:24.477123] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:02.077 [2024-06-09 09:11:24.477142] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:02.077 [2024-06-09 09:11:24.477157] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:02.077 [2024-06-09 09:11:24.477166] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:02.077 [2024-06-09 09:11:24.480534] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf33740 was disconnected and freed. delete nvme_qpair. 
00:33:02.077 09:11:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:02.077 09:11:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:03.020 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:03.020 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:03.020 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:03.020 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.020 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:03.020 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:03.020 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:03.020 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.020 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:03.020 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:03.020 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2797700 00:33:03.020 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 2797700 ']' 00:33:03.020 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 2797700 00:33:03.020 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:33:03.020 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:03.020 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2797700 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2797700' 00:33:03.282 killing process with pid 2797700 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 2797700 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 2797700 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:03.282 rmmod nvme_tcp 00:33:03.282 rmmod nvme_fabrics 00:33:03.282 rmmod nvme_keyring 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2797665 ']' 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2797665 
00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 2797665 ']' 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 2797665 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:03.282 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2797665 00:33:03.543 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:33:03.543 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:33:03.543 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2797665' 00:33:03.543 killing process with pid 2797665 00:33:03.543 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 2797665 00:33:03.543 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 2797665 00:33:03.543 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:03.543 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:03.543 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:03.543 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:03.543 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:03.543 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.543 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:33:03.543 09:11:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:06.093 09:11:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:06.093 00:33:06.093 real 0m23.546s 00:33:06.093 user 0m28.992s 00:33:06.093 sys 0m6.448s 00:33:06.093 09:11:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:06.093 09:11:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:06.093 ************************************ 00:33:06.093 END TEST nvmf_discovery_remove_ifc 00:33:06.093 ************************************ 00:33:06.093 09:11:28 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:06.093 09:11:28 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:33:06.093 09:11:28 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:06.093 09:11:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:06.093 ************************************ 00:33:06.093 START TEST nvmf_identify_kernel_target 00:33:06.093 ************************************ 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:06.093 * Looking for test storage... 
00:33:06.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:06.093 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.094 09:11:28 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:06.094 09:11:28 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:33:06.094 09:11:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:33:12.723 09:11:34 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:12.723 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.723 09:11:34 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:12.723 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:12.723 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:12.723 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:12.723 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:12.724 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:12.724 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:12.724 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:12.724 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:12.724 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:12.724 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:12.724 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:12.724 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:12.724 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:12.724 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:12.724 09:11:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:12.724 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:12.724 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:12.724 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link 
set cvl_0_1 up 00:33:12.724 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:12.724 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:12.724 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:12.724 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:12.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:12.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:33:12.724 00:33:12.724 --- 10.0.0.2 ping statistics --- 00:33:12.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.724 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:33:12.724 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:12.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:12.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:33:12.724 00:33:12.724 --- 10.0.0.1 ping statistics --- 00:33:12.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.724 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:33:12.724 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:12.724 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:33:12.724 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:12.724 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:12.724 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:12.724 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:12.724 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:12.724 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:12.724 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:13.016 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:13.016 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:13.016 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:33:13.016 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.016 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.016 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.016 09:11:35 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.016 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.016 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.016 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.017 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.017 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.017 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:13.017 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:13.017 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:13.017 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:13.017 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:13.017 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:13.017 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:13.017 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:33:13.017 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:13.017 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:13.017 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:13.017 09:11:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:16.320 Waiting for block devices as requested 00:33:16.320 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:16.320 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:16.320 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:16.320 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:16.581 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:16.581 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:16.581 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:16.842 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:16.842 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:17.103 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:17.103 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:17.103 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:17.103 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:17.364 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:17.364 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:17.364 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:17.626 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:17.887 No valid GPT data, bailing 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:17.887 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:33:17.888 00:33:17.888 Discovery Log Number of Records 2, Generation counter 2 00:33:17.888 =====Discovery Log Entry 0====== 00:33:17.888 trtype: tcp 00:33:17.888 adrfam: ipv4 00:33:17.888 subtype: current discovery subsystem 00:33:17.888 treq: not specified, sq flow control disable supported 00:33:17.888 portid: 1 00:33:17.888 trsvcid: 4420 00:33:17.888 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:17.888 traddr: 10.0.0.1 00:33:17.888 eflags: none 00:33:17.888 sectype: none 00:33:17.888 =====Discovery Log Entry 1====== 00:33:17.888 trtype: tcp 00:33:17.888 adrfam: ipv4 00:33:17.888 subtype: nvme subsystem 00:33:17.888 treq: not specified, sq flow control disable supported 00:33:17.888 portid: 1 00:33:17.888 trsvcid: 4420 00:33:17.888 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:17.888 traddr: 10.0.0.1 00:33:17.888 eflags: none 00:33:17.888 sectype: none 00:33:17.888 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:17.888 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:17.888 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.888 ===================================================== 00:33:17.888 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:17.888 ===================================================== 00:33:17.888 Controller Capabilities/Features 00:33:17.888 ================================ 00:33:17.888 Vendor ID: 0000 00:33:17.888 Subsystem Vendor ID: 0000 00:33:17.888 Serial Number: 8aba77ac44da63947f69 00:33:17.888 Model Number: Linux 00:33:17.888 Firmware Version: 6.7.0-68 00:33:17.888 Recommended Arb Burst: 0 00:33:17.888 IEEE OUI Identifier: 00 00 00 00:33:17.888 Multi-path I/O 00:33:17.888 May have multiple subsystem ports: No 00:33:17.888 May have multiple controllers: No 00:33:17.888 Associated with SR-IOV VF: No 00:33:17.888 Max Data Transfer Size: Unlimited 00:33:17.888 Max Number of Namespaces: 0 00:33:17.888 Max Number of I/O Queues: 1024 00:33:17.888 NVMe Specification Version (VS): 1.3 00:33:17.888 NVMe Specification Version (Identify): 1.3 00:33:17.888 Maximum Queue Entries: 1024 00:33:17.888 Contiguous Queues Required: No 00:33:17.888 Arbitration Mechanisms Supported 00:33:17.888 Weighted Round Robin: Not Supported 00:33:17.888 Vendor Specific: Not Supported 00:33:17.888 Reset Timeout: 7500 ms 00:33:17.888 Doorbell Stride: 4 bytes 00:33:17.888 NVM Subsystem Reset: Not Supported 00:33:17.888 Command Sets Supported 00:33:17.888 NVM Command Set: Supported 00:33:17.888 Boot Partition: Not Supported 00:33:17.888 Memory Page Size Minimum: 4096 bytes 00:33:17.888 Memory Page Size Maximum: 4096 bytes 00:33:17.888 Persistent Memory Region: Not Supported 00:33:17.888 Optional Asynchronous Events Supported 00:33:17.888 Namespace Attribute Notices: Not Supported 00:33:17.888 Firmware Activation Notices: Not Supported 00:33:17.888 ANA Change Notices: Not Supported 00:33:17.888 PLE Aggregate Log Change Notices: Not Supported 
00:33:17.888 LBA Status Info Alert Notices: Not Supported 00:33:17.888 EGE Aggregate Log Change Notices: Not Supported 00:33:17.888 Normal NVM Subsystem Shutdown event: Not Supported 00:33:17.888 Zone Descriptor Change Notices: Not Supported 00:33:17.888 Discovery Log Change Notices: Supported 00:33:17.888 Controller Attributes 00:33:17.888 128-bit Host Identifier: Not Supported 00:33:17.888 Non-Operational Permissive Mode: Not Supported 00:33:17.888 NVM Sets: Not Supported 00:33:17.888 Read Recovery Levels: Not Supported 00:33:17.888 Endurance Groups: Not Supported 00:33:17.888 Predictable Latency Mode: Not Supported 00:33:17.888 Traffic Based Keep ALive: Not Supported 00:33:17.888 Namespace Granularity: Not Supported 00:33:17.888 SQ Associations: Not Supported 00:33:17.888 UUID List: Not Supported 00:33:17.888 Multi-Domain Subsystem: Not Supported 00:33:17.888 Fixed Capacity Management: Not Supported 00:33:17.888 Variable Capacity Management: Not Supported 00:33:17.888 Delete Endurance Group: Not Supported 00:33:17.888 Delete NVM Set: Not Supported 00:33:17.888 Extended LBA Formats Supported: Not Supported 00:33:17.888 Flexible Data Placement Supported: Not Supported 00:33:17.888 00:33:17.888 Controller Memory Buffer Support 00:33:17.888 ================================ 00:33:17.888 Supported: No 00:33:17.888 00:33:17.888 Persistent Memory Region Support 00:33:17.888 ================================ 00:33:17.888 Supported: No 00:33:17.888 00:33:17.888 Admin Command Set Attributes 00:33:17.888 ============================ 00:33:17.888 Security Send/Receive: Not Supported 00:33:17.888 Format NVM: Not Supported 00:33:17.888 Firmware Activate/Download: Not Supported 00:33:17.888 Namespace Management: Not Supported 00:33:17.888 Device Self-Test: Not Supported 00:33:17.888 Directives: Not Supported 00:33:17.888 NVMe-MI: Not Supported 00:33:17.888 Virtualization Management: Not Supported 00:33:17.888 Doorbell Buffer Config: Not Supported 00:33:17.888 Get LBA Status 
Capability: Not Supported 00:33:17.888 Command & Feature Lockdown Capability: Not Supported 00:33:17.888 Abort Command Limit: 1 00:33:17.888 Async Event Request Limit: 1 00:33:17.888 Number of Firmware Slots: N/A 00:33:17.888 Firmware Slot 1 Read-Only: N/A 00:33:17.888 Firmware Activation Without Reset: N/A 00:33:17.888 Multiple Update Detection Support: N/A 00:33:17.888 Firmware Update Granularity: No Information Provided 00:33:17.888 Per-Namespace SMART Log: No 00:33:17.888 Asymmetric Namespace Access Log Page: Not Supported 00:33:17.888 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:17.888 Command Effects Log Page: Not Supported 00:33:17.888 Get Log Page Extended Data: Supported 00:33:17.888 Telemetry Log Pages: Not Supported 00:33:17.888 Persistent Event Log Pages: Not Supported 00:33:17.888 Supported Log Pages Log Page: May Support 00:33:17.888 Commands Supported & Effects Log Page: Not Supported 00:33:17.888 Feature Identifiers & Effects Log Page:May Support 00:33:17.888 NVMe-MI Commands & Effects Log Page: May Support 00:33:17.888 Data Area 4 for Telemetry Log: Not Supported 00:33:17.888 Error Log Page Entries Supported: 1 00:33:17.888 Keep Alive: Not Supported 00:33:17.888 00:33:17.888 NVM Command Set Attributes 00:33:17.888 ========================== 00:33:17.888 Submission Queue Entry Size 00:33:17.888 Max: 1 00:33:17.888 Min: 1 00:33:17.888 Completion Queue Entry Size 00:33:17.888 Max: 1 00:33:17.888 Min: 1 00:33:17.888 Number of Namespaces: 0 00:33:17.888 Compare Command: Not Supported 00:33:17.888 Write Uncorrectable Command: Not Supported 00:33:17.888 Dataset Management Command: Not Supported 00:33:17.888 Write Zeroes Command: Not Supported 00:33:17.888 Set Features Save Field: Not Supported 00:33:17.888 Reservations: Not Supported 00:33:17.888 Timestamp: Not Supported 00:33:17.888 Copy: Not Supported 00:33:17.888 Volatile Write Cache: Not Present 00:33:17.888 Atomic Write Unit (Normal): 1 00:33:17.888 Atomic Write Unit (PFail): 1 
00:33:17.888 Atomic Compare & Write Unit: 1 00:33:17.888 Fused Compare & Write: Not Supported 00:33:17.888 Scatter-Gather List 00:33:17.888 SGL Command Set: Supported 00:33:17.888 SGL Keyed: Not Supported 00:33:17.888 SGL Bit Bucket Descriptor: Not Supported 00:33:17.888 SGL Metadata Pointer: Not Supported 00:33:17.888 Oversized SGL: Not Supported 00:33:17.888 SGL Metadata Address: Not Supported 00:33:17.888 SGL Offset: Supported 00:33:17.888 Transport SGL Data Block: Not Supported 00:33:17.888 Replay Protected Memory Block: Not Supported 00:33:17.888 00:33:17.888 Firmware Slot Information 00:33:17.888 ========================= 00:33:17.888 Active slot: 0 00:33:17.888 00:33:17.888 00:33:17.888 Error Log 00:33:17.888 ========= 00:33:17.888 00:33:17.888 Active Namespaces 00:33:17.888 ================= 00:33:17.888 Discovery Log Page 00:33:17.888 ================== 00:33:17.888 Generation Counter: 2 00:33:17.888 Number of Records: 2 00:33:17.888 Record Format: 0 00:33:17.888 00:33:17.888 Discovery Log Entry 0 00:33:17.888 ---------------------- 00:33:17.888 Transport Type: 3 (TCP) 00:33:17.888 Address Family: 1 (IPv4) 00:33:17.888 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:17.888 Entry Flags: 00:33:17.888 Duplicate Returned Information: 0 00:33:17.888 Explicit Persistent Connection Support for Discovery: 0 00:33:17.888 Transport Requirements: 00:33:17.888 Secure Channel: Not Specified 00:33:17.888 Port ID: 1 (0x0001) 00:33:17.888 Controller ID: 65535 (0xffff) 00:33:17.888 Admin Max SQ Size: 32 00:33:17.888 Transport Service Identifier: 4420 00:33:17.888 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:17.888 Transport Address: 10.0.0.1 00:33:17.888 Discovery Log Entry 1 00:33:17.888 ---------------------- 00:33:17.888 Transport Type: 3 (TCP) 00:33:17.888 Address Family: 1 (IPv4) 00:33:17.888 Subsystem Type: 2 (NVM Subsystem) 00:33:17.889 Entry Flags: 00:33:17.889 Duplicate Returned Information: 0 00:33:17.889 Explicit Persistent 
Connection Support for Discovery: 0 00:33:17.889 Transport Requirements: 00:33:17.889 Secure Channel: Not Specified 00:33:17.889 Port ID: 1 (0x0001) 00:33:17.889 Controller ID: 65535 (0xffff) 00:33:17.889 Admin Max SQ Size: 32 00:33:17.889 Transport Service Identifier: 4420 00:33:17.889 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:17.889 Transport Address: 10.0.0.1 00:33:17.889 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:18.151 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.151 get_feature(0x01) failed 00:33:18.151 get_feature(0x02) failed 00:33:18.151 get_feature(0x04) failed 00:33:18.151 ===================================================== 00:33:18.151 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:18.151 ===================================================== 00:33:18.151 Controller Capabilities/Features 00:33:18.151 ================================ 00:33:18.151 Vendor ID: 0000 00:33:18.151 Subsystem Vendor ID: 0000 00:33:18.151 Serial Number: 6ca24f8e1df4e3d6c72e 00:33:18.151 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:18.151 Firmware Version: 6.7.0-68 00:33:18.151 Recommended Arb Burst: 6 00:33:18.151 IEEE OUI Identifier: 00 00 00 00:33:18.151 Multi-path I/O 00:33:18.151 May have multiple subsystem ports: Yes 00:33:18.151 May have multiple controllers: Yes 00:33:18.151 Associated with SR-IOV VF: No 00:33:18.151 Max Data Transfer Size: Unlimited 00:33:18.151 Max Number of Namespaces: 1024 00:33:18.151 Max Number of I/O Queues: 128 00:33:18.151 NVMe Specification Version (VS): 1.3 00:33:18.151 NVMe Specification Version (Identify): 1.3 00:33:18.151 Maximum Queue Entries: 1024 00:33:18.151 Contiguous Queues Required: No 00:33:18.151 Arbitration Mechanisms Supported 
00:33:18.151 Weighted Round Robin: Not Supported 00:33:18.151 Vendor Specific: Not Supported 00:33:18.151 Reset Timeout: 7500 ms 00:33:18.151 Doorbell Stride: 4 bytes 00:33:18.151 NVM Subsystem Reset: Not Supported 00:33:18.151 Command Sets Supported 00:33:18.151 NVM Command Set: Supported 00:33:18.151 Boot Partition: Not Supported 00:33:18.151 Memory Page Size Minimum: 4096 bytes 00:33:18.151 Memory Page Size Maximum: 4096 bytes 00:33:18.151 Persistent Memory Region: Not Supported 00:33:18.151 Optional Asynchronous Events Supported 00:33:18.151 Namespace Attribute Notices: Supported 00:33:18.151 Firmware Activation Notices: Not Supported 00:33:18.151 ANA Change Notices: Supported 00:33:18.151 PLE Aggregate Log Change Notices: Not Supported 00:33:18.151 LBA Status Info Alert Notices: Not Supported 00:33:18.151 EGE Aggregate Log Change Notices: Not Supported 00:33:18.151 Normal NVM Subsystem Shutdown event: Not Supported 00:33:18.151 Zone Descriptor Change Notices: Not Supported 00:33:18.151 Discovery Log Change Notices: Not Supported 00:33:18.151 Controller Attributes 00:33:18.151 128-bit Host Identifier: Supported 00:33:18.151 Non-Operational Permissive Mode: Not Supported 00:33:18.151 NVM Sets: Not Supported 00:33:18.151 Read Recovery Levels: Not Supported 00:33:18.151 Endurance Groups: Not Supported 00:33:18.151 Predictable Latency Mode: Not Supported 00:33:18.151 Traffic Based Keep ALive: Supported 00:33:18.151 Namespace Granularity: Not Supported 00:33:18.151 SQ Associations: Not Supported 00:33:18.151 UUID List: Not Supported 00:33:18.151 Multi-Domain Subsystem: Not Supported 00:33:18.151 Fixed Capacity Management: Not Supported 00:33:18.151 Variable Capacity Management: Not Supported 00:33:18.151 Delete Endurance Group: Not Supported 00:33:18.151 Delete NVM Set: Not Supported 00:33:18.151 Extended LBA Formats Supported: Not Supported 00:33:18.151 Flexible Data Placement Supported: Not Supported 00:33:18.151 00:33:18.151 Controller Memory Buffer Support 
00:33:18.151 ================================ 00:33:18.151 Supported: No 00:33:18.151 00:33:18.151 Persistent Memory Region Support 00:33:18.151 ================================ 00:33:18.151 Supported: No 00:33:18.151 00:33:18.151 Admin Command Set Attributes 00:33:18.151 ============================ 00:33:18.151 Security Send/Receive: Not Supported 00:33:18.151 Format NVM: Not Supported 00:33:18.151 Firmware Activate/Download: Not Supported 00:33:18.151 Namespace Management: Not Supported 00:33:18.151 Device Self-Test: Not Supported 00:33:18.151 Directives: Not Supported 00:33:18.151 NVMe-MI: Not Supported 00:33:18.151 Virtualization Management: Not Supported 00:33:18.151 Doorbell Buffer Config: Not Supported 00:33:18.151 Get LBA Status Capability: Not Supported 00:33:18.151 Command & Feature Lockdown Capability: Not Supported 00:33:18.151 Abort Command Limit: 4 00:33:18.151 Async Event Request Limit: 4 00:33:18.151 Number of Firmware Slots: N/A 00:33:18.151 Firmware Slot 1 Read-Only: N/A 00:33:18.151 Firmware Activation Without Reset: N/A 00:33:18.151 Multiple Update Detection Support: N/A 00:33:18.151 Firmware Update Granularity: No Information Provided 00:33:18.151 Per-Namespace SMART Log: Yes 00:33:18.151 Asymmetric Namespace Access Log Page: Supported 00:33:18.151 ANA Transition Time : 10 sec 00:33:18.151 00:33:18.151 Asymmetric Namespace Access Capabilities 00:33:18.151 ANA Optimized State : Supported 00:33:18.151 ANA Non-Optimized State : Supported 00:33:18.151 ANA Inaccessible State : Supported 00:33:18.151 ANA Persistent Loss State : Supported 00:33:18.151 ANA Change State : Supported 00:33:18.151 ANAGRPID is not changed : No 00:33:18.151 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:18.151 00:33:18.151 ANA Group Identifier Maximum : 128 00:33:18.151 Number of ANA Group Identifiers : 128 00:33:18.151 Max Number of Allowed Namespaces : 1024 00:33:18.151 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:18.151 Command Effects Log Page: Supported 
00:33:18.151 Get Log Page Extended Data: Supported 00:33:18.151 Telemetry Log Pages: Not Supported 00:33:18.151 Persistent Event Log Pages: Not Supported 00:33:18.151 Supported Log Pages Log Page: May Support 00:33:18.151 Commands Supported & Effects Log Page: Not Supported 00:33:18.151 Feature Identifiers & Effects Log Page:May Support 00:33:18.151 NVMe-MI Commands & Effects Log Page: May Support 00:33:18.151 Data Area 4 for Telemetry Log: Not Supported 00:33:18.151 Error Log Page Entries Supported: 128 00:33:18.151 Keep Alive: Supported 00:33:18.151 Keep Alive Granularity: 1000 ms 00:33:18.151 00:33:18.151 NVM Command Set Attributes 00:33:18.151 ========================== 00:33:18.151 Submission Queue Entry Size 00:33:18.151 Max: 64 00:33:18.151 Min: 64 00:33:18.151 Completion Queue Entry Size 00:33:18.151 Max: 16 00:33:18.151 Min: 16 00:33:18.151 Number of Namespaces: 1024 00:33:18.151 Compare Command: Not Supported 00:33:18.151 Write Uncorrectable Command: Not Supported 00:33:18.151 Dataset Management Command: Supported 00:33:18.151 Write Zeroes Command: Supported 00:33:18.151 Set Features Save Field: Not Supported 00:33:18.151 Reservations: Not Supported 00:33:18.151 Timestamp: Not Supported 00:33:18.151 Copy: Not Supported 00:33:18.151 Volatile Write Cache: Present 00:33:18.151 Atomic Write Unit (Normal): 1 00:33:18.151 Atomic Write Unit (PFail): 1 00:33:18.151 Atomic Compare & Write Unit: 1 00:33:18.151 Fused Compare & Write: Not Supported 00:33:18.151 Scatter-Gather List 00:33:18.151 SGL Command Set: Supported 00:33:18.151 SGL Keyed: Not Supported 00:33:18.151 SGL Bit Bucket Descriptor: Not Supported 00:33:18.151 SGL Metadata Pointer: Not Supported 00:33:18.151 Oversized SGL: Not Supported 00:33:18.151 SGL Metadata Address: Not Supported 00:33:18.151 SGL Offset: Supported 00:33:18.151 Transport SGL Data Block: Not Supported 00:33:18.151 Replay Protected Memory Block: Not Supported 00:33:18.151 00:33:18.151 Firmware Slot Information 00:33:18.151 
========================= 00:33:18.151 Active slot: 0 00:33:18.151 00:33:18.151 Asymmetric Namespace Access 00:33:18.151 =========================== 00:33:18.151 Change Count : 0 00:33:18.151 Number of ANA Group Descriptors : 1 00:33:18.151 ANA Group Descriptor : 0 00:33:18.151 ANA Group ID : 1 00:33:18.151 Number of NSID Values : 1 00:33:18.151 Change Count : 0 00:33:18.151 ANA State : 1 00:33:18.151 Namespace Identifier : 1 00:33:18.151 00:33:18.151 Commands Supported and Effects 00:33:18.151 ============================== 00:33:18.151 Admin Commands 00:33:18.151 -------------- 00:33:18.151 Get Log Page (02h): Supported 00:33:18.151 Identify (06h): Supported 00:33:18.151 Abort (08h): Supported 00:33:18.151 Set Features (09h): Supported 00:33:18.151 Get Features (0Ah): Supported 00:33:18.152 Asynchronous Event Request (0Ch): Supported 00:33:18.152 Keep Alive (18h): Supported 00:33:18.152 I/O Commands 00:33:18.152 ------------ 00:33:18.152 Flush (00h): Supported 00:33:18.152 Write (01h): Supported LBA-Change 00:33:18.152 Read (02h): Supported 00:33:18.152 Write Zeroes (08h): Supported LBA-Change 00:33:18.152 Dataset Management (09h): Supported 00:33:18.152 00:33:18.152 Error Log 00:33:18.152 ========= 00:33:18.152 Entry: 0 00:33:18.152 Error Count: 0x3 00:33:18.152 Submission Queue Id: 0x0 00:33:18.152 Command Id: 0x5 00:33:18.152 Phase Bit: 0 00:33:18.152 Status Code: 0x2 00:33:18.152 Status Code Type: 0x0 00:33:18.152 Do Not Retry: 1 00:33:18.152 Error Location: 0x28 00:33:18.152 LBA: 0x0 00:33:18.152 Namespace: 0x0 00:33:18.152 Vendor Log Page: 0x0 00:33:18.152 ----------- 00:33:18.152 Entry: 1 00:33:18.152 Error Count: 0x2 00:33:18.152 Submission Queue Id: 0x0 00:33:18.152 Command Id: 0x5 00:33:18.152 Phase Bit: 0 00:33:18.152 Status Code: 0x2 00:33:18.152 Status Code Type: 0x0 00:33:18.152 Do Not Retry: 1 00:33:18.152 Error Location: 0x28 00:33:18.152 LBA: 0x0 00:33:18.152 Namespace: 0x0 00:33:18.152 Vendor Log Page: 0x0 00:33:18.152 ----------- 00:33:18.152 
Entry: 2 00:33:18.152 Error Count: 0x1 00:33:18.152 Submission Queue Id: 0x0 00:33:18.152 Command Id: 0x4 00:33:18.152 Phase Bit: 0 00:33:18.152 Status Code: 0x2 00:33:18.152 Status Code Type: 0x0 00:33:18.152 Do Not Retry: 1 00:33:18.152 Error Location: 0x28 00:33:18.152 LBA: 0x0 00:33:18.152 Namespace: 0x0 00:33:18.152 Vendor Log Page: 0x0 00:33:18.152 00:33:18.152 Number of Queues 00:33:18.152 ================ 00:33:18.152 Number of I/O Submission Queues: 128 00:33:18.152 Number of I/O Completion Queues: 128 00:33:18.152 00:33:18.152 ZNS Specific Controller Data 00:33:18.152 ============================ 00:33:18.152 Zone Append Size Limit: 0 00:33:18.152 00:33:18.152 00:33:18.152 Active Namespaces 00:33:18.152 ================= 00:33:18.152 get_feature(0x05) failed 00:33:18.152 Namespace ID:1 00:33:18.152 Command Set Identifier: NVM (00h) 00:33:18.152 Deallocate: Supported 00:33:18.152 Deallocated/Unwritten Error: Not Supported 00:33:18.152 Deallocated Read Value: Unknown 00:33:18.152 Deallocate in Write Zeroes: Not Supported 00:33:18.152 Deallocated Guard Field: 0xFFFF 00:33:18.152 Flush: Supported 00:33:18.152 Reservation: Not Supported 00:33:18.152 Namespace Sharing Capabilities: Multiple Controllers 00:33:18.152 Size (in LBAs): 3750748848 (1788GiB) 00:33:18.152 Capacity (in LBAs): 3750748848 (1788GiB) 00:33:18.152 Utilization (in LBAs): 3750748848 (1788GiB) 00:33:18.152 UUID: 6938b331-50c2-42e8-9018-362ee7401edf 00:33:18.152 Thin Provisioning: Not Supported 00:33:18.152 Per-NS Atomic Units: Yes 00:33:18.152 Atomic Write Unit (Normal): 8 00:33:18.152 Atomic Write Unit (PFail): 8 00:33:18.152 Preferred Write Granularity: 8 00:33:18.152 Atomic Compare & Write Unit: 8 00:33:18.152 Atomic Boundary Size (Normal): 0 00:33:18.152 Atomic Boundary Size (PFail): 0 00:33:18.152 Atomic Boundary Offset: 0 00:33:18.152 NGUID/EUI64 Never Reused: No 00:33:18.152 ANA group ID: 1 00:33:18.152 Namespace Write Protected: No 00:33:18.152 Number of LBA Formats: 1 00:33:18.152 
Current LBA Format: LBA Format #00 00:33:18.152 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:18.152 00:33:18.152 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:18.152 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:18.152 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:33:18.152 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:18.152 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:33:18.152 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:18.152 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:18.152 rmmod nvme_tcp 00:33:18.152 rmmod nvme_fabrics 00:33:18.152 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:18.152 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:33:18.152 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:33:18.152 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:33:18.152 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:18.152 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:18.152 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:18.152 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:18.152 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:18.152 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:18.152 09:11:40 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:18.152 09:11:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:20.067 09:11:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:20.067 09:11:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:20.067 09:11:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:20.067 09:11:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:33:20.328 09:11:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:20.328 09:11:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:20.328 09:11:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:20.328 09:11:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:20.328 09:11:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:20.328 09:11:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:20.328 09:11:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:23.634 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:23.634 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:23.634 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:23.634 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:23.634 0000:80:01.2 (8086 0b00): ioatdma -> 
vfio-pci 00:33:23.634 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:23.634 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:23.634 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:23.634 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:23.634 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:23.634 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:23.634 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:23.634 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:23.634 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:23.634 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:23.634 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:23.634 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:23.634 00:33:23.634 real 0m18.006s 00:33:23.634 user 0m4.679s 00:33:23.634 sys 0m10.291s 00:33:23.634 09:11:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:23.634 09:11:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:23.634 ************************************ 00:33:23.634 END TEST nvmf_identify_kernel_target 00:33:23.634 ************************************ 00:33:23.634 09:11:46 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:23.634 09:11:46 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:33:23.635 09:11:46 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:23.635 09:11:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:23.897 ************************************ 00:33:23.897 START TEST nvmf_auth_host 00:33:23.897 ************************************ 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:23.897 * Looking for test storage... 
00:33:23.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:23.897 
09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:23.897 
09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:33:23.897 09:11:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:30.491 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.491 09:11:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:30.491 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:4b:00.0: cvl_0_0' 00:33:30.491 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.491 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:30.492 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:30.492 09:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:30.492 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:30.492 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:30.492 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:30.492 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:30.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:30.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:33:30.753 00:33:30.753 --- 10.0.0.2 ping statistics --- 00:33:30.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.753 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:30.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:30.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:33:30.753 00:33:30.753 --- 10.0.0.1 ping statistics --- 00:33:30.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.753 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.753 09:11:53 
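The `nvmf_tcp_init` trace above turns the two E810 ports (`cvl_0_0`, `cvl_0_1`) into an isolated point-to-point link: the target-side port is moved into a private network namespace, both sides get addresses on 10.0.0.0/24, port 4420 is opened in the firewall, and connectivity is verified with a ping in each direction. A minimal sketch of that topology, reconstructed from the commands in the trace (interface names and addresses are the ones in this log; requires root):

```shell
#!/usr/bin/env bash
# Sketch of the namespace topology nvmf_tcp_init builds (run as root).
set -e
TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1          # namespace -> root ns
```

Because the target runs behind `ip netns exec`, the log's later prefixing of `NVMF_APP` with `NVMF_TARGET_NS_CMD` makes every `nvmf_tgt` invocation land inside the namespace automatically.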
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2811849 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2811849 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 2811849 ']' 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:30.753 09:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:31.695 09:11:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ef75f238d9737e8e8e7b580e3e03a130 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.tuu 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ef75f238d9737e8e8e7b580e3e03a130 0 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ef75f238d9737e8e8e7b580e3e03a130 0 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ef75f238d9737e8e8e7b580e3e03a130 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:31.695 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.tuu 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.tuu 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.tuu 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d3d95c8ed3b531d7ac5c6284eab3f123db9c2cbba62e4fd17d88ccaddde977b8 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.sXM 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d3d95c8ed3b531d7ac5c6284eab3f123db9c2cbba62e4fd17d88ccaddde977b8 3 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d3d95c8ed3b531d7ac5c6284eab3f123db9c2cbba62e4fd17d88ccaddde977b8 3 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d3d95c8ed3b531d7ac5c6284eab3f123db9c2cbba62e4fd17d88ccaddde977b8 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.sXM 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.sXM 00:33:31.696 09:11:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.sXM 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1919d305919cd338ecf35979126e9c1f9e09b4c69c8d61b8 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.JEG 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1919d305919cd338ecf35979126e9c1f9e09b4c69c8d61b8 0 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1919d305919cd338ecf35979126e9c1f9e09b4c69c8d61b8 0 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1919d305919cd338ecf35979126e9c1f9e09b4c69c8d61b8 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.JEG 00:33:31.696 09:11:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.JEG 00:33:31.696 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.JEG 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8e8d17554197f6456e06e07bec28c6ceb6cdac854905e1eb 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.jmX 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8e8d17554197f6456e06e07bec28c6ceb6cdac854905e1eb 2 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8e8d17554197f6456e06e07bec28c6ceb6cdac854905e1eb 2 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8e8d17554197f6456e06e07bec28c6ceb6cdac854905e1eb 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:31.957 09:11:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.jmX 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.jmX 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.jmX 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1b01e62d3dbf2c639339ea32d8e67e87 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Sp4 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1b01e62d3dbf2c639339ea32d8e67e87 1 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1b01e62d3dbf2c639339ea32d8e67e87 1 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1b01e62d3dbf2c639339ea32d8e67e87 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:31.957 09:11:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Sp4 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Sp4 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Sp4 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=98e20cc01cda1dd9128501a38234214a 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Lkk 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 98e20cc01cda1dd9128501a38234214a 1 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 98e20cc01cda1dd9128501a38234214a 1 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=98e20cc01cda1dd9128501a38234214a 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:31.958 
09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Lkk 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Lkk 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Lkk 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=783b60f28eb18036310e889eb916b09cbc340127a69827fc 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.bRL 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 783b60f28eb18036310e889eb916b09cbc340127a69827fc 2 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 783b60f28eb18036310e889eb916b09cbc340127a69827fc 2 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=783b60f28eb18036310e889eb916b09cbc340127a69827fc 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.bRL 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.bRL 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.bRL 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=33d4d7d8ff2a7318cb4f769b9d239ee3 00:33:31.958 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:32.218 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Gz2 00:33:32.218 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 33d4d7d8ff2a7318cb4f769b9d239ee3 0 00:33:32.218 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 33d4d7d8ff2a7318cb4f769b9d239ee3 0 00:33:32.218 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:32.218 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:32.218 09:11:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=33d4d7d8ff2a7318cb4f769b9d239ee3 00:33:32.218 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:32.218 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:32.218 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Gz2 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Gz2 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Gz2 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1e75c7dbc10822509020993a992ccb7d000019a102edf2740b813aba1f46faf4 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.gCD 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1e75c7dbc10822509020993a992ccb7d000019a102edf2740b813aba1f46faf4 3 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1e75c7dbc10822509020993a992ccb7d000019a102edf2740b813aba1f46faf4 3 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1e75c7dbc10822509020993a992ccb7d000019a102edf2740b813aba1f46faf4 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.gCD 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.gCD 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.gCD 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2811849 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 2811849 ']' 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:32.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
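Each `gen_dhchap_key <digest> <len>` call in the trace follows the same recipe: read `len/2` random bytes from `/dev/urandom` via `xxd`, format them as a DHHC-1 secret with an inline Python snippet, write the result to a `mktemp` file, and `chmod 0600` it. The DHHC-1 wire format itself is not shown in the trace; the sketch below assumes the NVMe DH-HMAC-CHAP secret representation (base64 of the key bytes followed by their little-endian CRC32, prefixed with the digest id, `00` for null) and uses Python for the random bytes as well, so treat the layout as an assumption rather than a quote from the log:

```shell
# Sketch: generate a 32-hex-char null-digest DHCHAP secret (layout assumed,
# not taken verbatim from the log).
secret=$(python3 - <<'EOF'
import base64, os, struct, zlib
key = os.urandom(16)                      # 16 random bytes, like `xxd -p -c0 -l 16 /dev/urandom`
crc = struct.pack("<I", zlib.crc32(key))  # little-endian CRC32 of the key bytes
print("DHHC-1:00:" + base64.b64encode(key + crc).decode() + ":")
EOF
)
umask 077                                 # file is created 0600, matching the log's chmod
printf '%s\n' "$secret" > /tmp/spdk.key-null.example
echo "$secret"
```

The digest id (`00` here) is what the trace's `format_dhchap_key ... 0/1/2/3` argument selects, mapping null/sha256/sha384/sha512 to 0-3 as in the `digests` associative array.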
00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:32.219 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tuu 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.sXM ]] 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sXM 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.JEG 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.jmX ]] 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jmX 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Sp4 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Lkk ]] 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lkk 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.bRL 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.480 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:32.480 
09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Gz2 ]] 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Gz2 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.gCD 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- 
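Once `nvmf_tgt` is listening, the generated secrets are handed to it as keyring files through the `keyring_file_add_key` JSON-RPC, one `key<i>`/`ckey<i>` pair per iteration of the loop above. A sketch of the equivalent direct RPC calls (file names are the ones from this run; the `rpc.py` path and the assumption that it talks to the default `/var/tmp/spdk.sock` inside the namespace come from the log's invocation pattern, not from a separate command shown here):

```shell
# Sketch: register host/controller DHCHAP secrets with the running nvmf_tgt.
RPC="ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
$RPC keyring_file_add_key key0  /tmp/spdk.key-null.tuu     # host key 0
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sXM   # controller key 0
$RPC keyring_file_add_key key1  /tmp/spdk.key-null.JEG
$RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jmX
# ...and so on for key2/ckey2, key3/ckey3, key4 (ckey4 is empty in this run).
```

Note that `ckeys[4]` is deliberately empty in the trace, so the `[[ -n '' ]]` guard skips its registration.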
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:32.481 09:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:35.781 Waiting for block devices as requested 00:33:35.781 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:35.781 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:35.781 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:35.781 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:36.041 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:36.041 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:36.041 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:36.302 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:36.302 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:36.562 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:36.562 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:36.562 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:36.823 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:36.823 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:36.823 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:36.823 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:37.083 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1664 -- # [[ none != none ]] 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:38.052 No valid GPT data, bailing 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:33:38.052 00:33:38.052 Discovery Log Number of Records 2, Generation counter 2 00:33:38.052 =====Discovery Log Entry 0====== 00:33:38.052 trtype: tcp 00:33:38.052 adrfam: ipv4 00:33:38.052 subtype: current discovery subsystem 00:33:38.052 treq: not specified, sq flow control disable supported 00:33:38.052 portid: 1 00:33:38.052 trsvcid: 4420 00:33:38.052 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:38.052 traddr: 10.0.0.1 00:33:38.052 eflags: none 00:33:38.052 sectype: none 00:33:38.052 =====Discovery Log Entry 1====== 00:33:38.052 trtype: tcp 00:33:38.052 adrfam: ipv4 00:33:38.052 subtype: nvme subsystem 00:33:38.052 treq: not specified, sq flow control disable supported 00:33:38.052 portid: 1 00:33:38.052 trsvcid: 4420 00:33:38.052 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:38.052 traddr: 10.0.0.1 00:33:38.052 eflags: none 00:33:38.052 sectype: none 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.052 09:12:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: ]] 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.052 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.053 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:38.053 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:38.053 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:38.053 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.053 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.053 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:38.053 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.053 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:38.053 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:38.053 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:38.053 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:38.053 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.053 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.336 nvme0n1 00:33:38.336 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.336 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.336 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:38.336 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.336 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.336 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.336 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.336 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: ]] 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.337 nvme0n1 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.337 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: ]] 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:38.599 09:12:00 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.599 09:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.599 nvme0n1 00:33:38.599 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.599 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.599 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:38.599 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.599 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.599 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.599 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.599 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:38.599 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.599 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:38.859 09:12:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: ]] 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.859 nvme0n1 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: ]] 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.859 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.120 nvme0n1 00:33:39.120 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.120 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:39.120 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:39.120 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.120 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.120 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.120 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:39.120 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:39.120 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.120 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.120 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.121 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.382 nvme0n1 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:39.382 09:12:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: ]] 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.382 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:39.383 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:39.383 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:39.383 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:39.383 09:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:39.383 09:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:39.383 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.383 09:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.643 nvme0n1 00:33:39.643 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.643 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:39.643 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.643 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: ]] 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # dhgroup=ffdhe3072 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:39.644 09:12:02 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.644 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.904 nvme0n1 00:33:39.904 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.904 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:39.904 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:39.904 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.904 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.904 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@46 -- # ckey=DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: ]] 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:39.905 09:12:02 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.905 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.166 nvme0n1 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: ]] 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:40.166 09:12:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:40.166 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.428 nvme0n1 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:40.428 09:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.703 nvme0n1 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:40.703 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:33:40.704 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:33:40.704 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:40.704 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:40.704 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:33:40.704 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: ]] 00:33:40.704 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:33:40.704 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 
00:33:40.704 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:40.704 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:40.704 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:40.704 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:40.705 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:40.705 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:40.705 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:40.705 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.705 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:40.705 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:40.705 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:40.705 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:40.705 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:40.705 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:40.705 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:40.705 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:40.705 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:40.705 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:40.705 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:40.705 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:40.705 09:12:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:40.706 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:40.706 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.973 nvme0n1 00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==:
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==:
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==:
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: ]]
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==:
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:40.973 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:40.974 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:40.974 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:40.974 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:40.974 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:41.234 nvme0n1
00:33:41.234 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:41.234 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:41.234 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:41.234 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:41.234 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD:
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX:
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD:
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: ]]
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX:
00:33:41.495 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:41.496 09:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:41.757 nvme0n1
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==:
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7:
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==:
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: ]]
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7:
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:41.757 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.017 nvme0n1
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=:
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=:
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:42.017 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.278 nvme0n1
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa:
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=:
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa:
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: ]]
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=:
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:42.539 09:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.110 nvme0n1
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==:
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==:
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==:
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: ]]
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==:
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:43.110 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.371 nvme0n1
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD:
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX:
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD:
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: ]]
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX:
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:43.371 09:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.943 nvme0n1
00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: ]] 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:43.943 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.515 nvme0n1 00:33:44.515 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:44.515 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:44.516 09:12:06 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:44.516 09:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.516 09:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.516 09:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:44.516 09:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:44.516 09:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:44.516 09:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:44.516 09:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:44.516 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:44.516 09:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:44.516 09:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.087 nvme0n1 00:33:45.087 09:12:07 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:45.087 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.087 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.087 09:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:45.087 09:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.087 09:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:45.087 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.087 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.087 09:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:45.087 09:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.087 09:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:45.087 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:45.087 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 
00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: ]] 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:45.088 09:12:07 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:45.088 09:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.031 nvme0n1 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: ]] 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:46.031 09:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.603 nvme0n1 00:33:46.603 09:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:46.603 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.603 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.603 09:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:46.603 09:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.603 09:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: ]] 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:46.866 09:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.436 nvme0n1 00:33:47.436 09:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:47.436 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.436 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.436 09:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:47.436 09:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.436 09:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
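The `get_main_ns_ip` helper traced above (nvmf/common.sh@741-755) maps the transport to the *name* of an environment variable via an associative array, then dereferences that name to get the address. A simplified sketch of that lookup, assuming the real helper behaves as the trace shows:

```shell
#!/usr/bin/env bash
# Map transport -> name of the variable holding the address, as in the trace.
declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)

NVMF_INITIATOR_IP=10.0.0.1   # the value echoed by the TCP tests in this log

get_main_ns_ip() {
  local transport=$1
  local var=${ip_candidates[$transport]}
  [[ -z $var ]] && return 1   # unknown transport: no candidate variable
  echo "${!var}"              # indirect expansion: value of $NVMF_INITIATOR_IP
}

get_main_ns_ip tcp   # -> 10.0.0.1
```

The two-step indirection (`${ip_candidates[tcp]}` gives `NVMF_INITIATOR_IP`, then `${!var}` gives `10.0.0.1`) matches the `[[ -z NVMF_INITIATOR_IP ]]` / `echo 10.0.0.1` pair visible in the trace.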
00:33:47.436 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.436 09:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.436 09:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:47.436 09:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: ]] 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:33:47.697 09:12:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:47.697 09:12:10 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:47.697 09:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.268 nvme0n1 00:33:48.268 09:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:48.268 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:48.268 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:48.268 09:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:48.268 09:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.268 09:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:48.268 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:48.268 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:48.268 09:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:48.268 09:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.529 09:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:48.529 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:48.529 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:48.529 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:48.529 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
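The DHHC-1 secrets exchanged throughout this log use the NVMe DH-HMAC-CHAP representation `DHHC-1:<t>:<base64>:`, where `<t>` records the hash used when generating the key (the `:01:` keys here carry 32-byte secrets, `:02:` 48-byte, `:03:` 64-byte) and, to the best of my understanding, the base64 blob is the secret followed by a 4-byte CRC-32 of it. A hedged sketch that unpacks one of the keyid=2 secrets from this log, checking structure only (no CRC verification):

```shell
#!/usr/bin/env bash
# One of the keyid=2 secrets appearing in the trace above.
key='DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD:'

# Strip the "DHHC-1:<t>:" prefix and the trailing ":" to isolate the base64 blob.
b64=${key#DHHC-1:*:}
b64=${b64%:}

# For a :01: key the secret is 32 bytes; the assumed trailing 4 bytes are the checksum.
blob_len=$(printf '%s' "$b64" | base64 -d | wc -c | tr -d ' ')
secret=$(printf '%s' "$b64" | base64 -d | head -c 32)
echo "$blob_len"   # 36 = 32-byte secret + 4-byte checksum
echo "$secret"     # 1b01e62d3dbf2c639339ea32d8e67e87
```

Note the secret itself decodes to printable ASCII here because the test keys were generated as hex strings; nothing in the format requires that.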
00:33:48.529 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:48.529 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:48.529 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:33:48.529 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:48.529 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:48.529 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:48.529 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:33:48.529 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:48.529 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:48.529 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:48.530 09:12:10 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:48.530 09:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.102 nvme0n1 00:33:49.102 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.102 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.102 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.102 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.102 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.102 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.102 09:12:11 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.102 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.102 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.102 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: ]] 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:49.364 
09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.364 nvme0n1 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.364 
09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: ]] 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.364 09:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.625 nvme0n1 00:33:49.625 09:12:12 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:49.625 09:12:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: ]] 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.625 09:12:12 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.625 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.886 nvme0n1 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.886 
09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: ]] 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:49.886 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:49.887 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:49.887 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.887 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.887 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:49.887 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.887 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:49.887 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:49.887 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:49.887 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:49.887 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.887 09:12:12 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.147 nvme0n1 00:33:50.147 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:50.148 09:12:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.148 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.409 nvme0n1 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: ]] 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:50.409 09:12:12 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.409 09:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.681 nvme0n1 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:33:50.681 09:12:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: ]] 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.681 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.946 nvme0n1 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: ]] 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.946 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.208 nvme0n1 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: ]] 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.208 09:12:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:51.208 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.209 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.209 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:51.209 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.209 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:51.209 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:51.209 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:51.209 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:51.209 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.209 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.470 nvme0n1 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.470 09:12:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:51.470 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:51.471 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.471 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.471 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:51.471 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.471 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:51.471 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:51.471 09:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:51.471 09:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:51.471 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.471 09:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.732 nvme0n1 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:51.732 
09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: ]] 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.732 
09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.732 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.993 nvme0n1 00:33:51.993 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.993 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:51.993 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.993 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:51.993 09:12:14 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.993 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.993 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.993 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:51.993 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.993 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: ]] 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.994 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.255 nvme0n1 00:33:52.255 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:52.255 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:52.255 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:52.255 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:52.255 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.255 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:52.255 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.255 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:52.255 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:52.255 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: ]] 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:52.516 09:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.777 nvme0n1 00:33:52.777 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:52.777 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:33:52.777 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:52.777 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:52.777 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.777 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:52.777 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.777 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:52.777 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: ]] 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:52.778 09:12:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:52.778 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.039 nvme0n1 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.039 09:12:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:53.039 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:53.040 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:53.040 09:12:15 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.040 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.040 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.040 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:53.040 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:53.040 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:53.040 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:53.040 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:53.040 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:53.040 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:53.040 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:53.040 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:53.040 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:53.040 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:53.040 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:53.040 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.040 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.301 nvme0n1 00:33:53.301 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.301 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:53.301 09:12:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:33:53.301 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.301 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.301 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:53.562 09:12:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: ]] 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:53.562 09:12:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.562 09:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.830 nvme0n1 00:33:53.830 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.830 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:53.830 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:53.830 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.830 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.830 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.125 
09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: ]] 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.125 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.387 nvme0n1 00:33:54.387 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.387 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.387 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.387 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.387 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.387 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: ]] 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:54.648 09:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:54.649 09:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:54.649 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.649 09:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.910 nvme0n1 00:33:54.910 09:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.910 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.910 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.910 09:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.910 09:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.172 09:12:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: ]] 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:55.172 09:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.744 nvme0n1 00:33:55.744 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:55.744 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.744 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.744 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:55.744 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:55.745 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.005 nvme0n1 00:33:56.005 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:56.005 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.005 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:56.005 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.006 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.006 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 
-- # xtrace_disable 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: ]] 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:56.267 09:12:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:56.267 09:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.840 nvme0n1 00:33:56.840 09:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:56.840 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.840 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.840 09:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:56.840 09:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.840 09:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: ]] 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:57.102 09:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.674 nvme0n1 00:33:57.674 09:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:57.674 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.674 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.674 09:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:57.674 09:12:20 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: ]] 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:57.935 09:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.507 nvme0n1 00:33:58.508 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:58.508 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.508 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.508 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:58.508 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.508 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.769 09:12:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: ]] 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:58.769 09:12:21 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:58.769 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.341 nvme0n1 00:33:59.341 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:59.341 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.341 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.341 09:12:21 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:33:59.341 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.341 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:59.602 
09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.602 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:59.603 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.603 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:59.603 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:59.603 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:59.603 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.603 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.603 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:59.603 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.603 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:59.603 09:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:59.603 09:12:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:59.603 09:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:59.603 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:59.603 09:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.174 nvme0n1 00:34:00.174 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.174 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.174 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.174 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.174 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.174 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.435 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.435 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.435 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.435 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.435 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.435 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:00.435 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:00.435 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.435 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:00.435 
09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.435 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:00.435 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:00.435 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:00.435 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:34:00.435 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:34:00.435 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:00.435 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:00.435 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: ]] 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.436 nvme0n1 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.436 09:12:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: ]] 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:00.436 09:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:00.701 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.701 09:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.701 nvme0n1 00:34:00.701 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.701 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.701 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.701 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.701 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.701 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.701 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.701 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.701 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.701 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:34:00.701 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.701 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: ]] 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:00.702 
09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:00.702 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.703 09:12:23 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:00.966 nvme0n1 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:34:00.966 
09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: ]] 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.966 09:12:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.966 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.228 nvme0n1 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.228 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.489 nvme0n1 00:34:01.489 09:12:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.489 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.489 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.489 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.489 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.489 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.489 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.489 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.489 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: ]] 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:01.490 09:12:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.490 09:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.751 nvme0n1 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.751 
09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: ]] 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:01.751 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.752 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:01.752 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.752 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.752 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.752 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.752 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:01.752 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:01.752 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:01.752 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.752 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.752 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:01.752 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.752 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:01.752 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:01.752 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 
00:34:01.752 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:01.752 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.752 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.014 nvme0n1 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: ]] 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.014 09:12:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.014 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.276 nvme0n1 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.276 09:12:24 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: ]] 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.276 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.537 nvme0n1 00:34:02.537 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.537 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.537 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.537 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.537 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.537 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.537 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.537 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.537 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.538 09:12:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.538 09:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.799 nvme0n1 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: ]] 00:34:02.799 09:12:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.799 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.060 nvme0n1 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:03.060 09:12:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: ]] 00:34:03.060 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.061 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.321 nvme0n1 00:34:03.321 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.321 09:12:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.321 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.321 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.321 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: ]] 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.583 09:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.844 nvme0n1 00:34:03.844 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.844 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.844 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.844 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.844 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.844 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.844 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.844 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.844 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: ]] 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.845 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.106 nvme0n1 00:34:04.106 09:12:26 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.106 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.106 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.106 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.106 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.106 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.106 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.106 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.106 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.106 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.106 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.106 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.106 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.107 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.368 nvme0n1 00:34:04.368 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.368 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.368 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.368 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.368 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.368 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:04.695 
09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: ]] 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:04.695 09:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.695 09:12:26 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.955 nvme0n1 00:34:04.955 09:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.955 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.955 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.955 09:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.955 09:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.955 09:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.955 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.955 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.955 09:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.955 09:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: ]] 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.216 09:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.477 nvme0n1 00:34:05.477 09:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.477 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.477 09:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.477 09:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.477 09:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.477 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.738 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.738 09:12:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.738 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.738 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.738 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.738 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.738 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:05.738 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.738 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:05.738 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:05.738 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:05.738 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:34:05.738 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: ]] 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.739 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.999 nvme0n1 00:34:05.999 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.999 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.999 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.999 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.999 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.999 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.260 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.260 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:06.261 09:12:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: ]] 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.261 09:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.521 nvme0n1 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.521 09:12:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.521 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.782 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.782 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.782 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:06.782 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:06.782 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:06.782 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.782 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.782 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:06.782 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.782 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:06.782 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:06.782 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:06.782 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:06.782 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.782 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.107 nvme0n1 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY3NWYyMzhkOTczN2U4ZThlN2I1ODBlM2UwM2ExMzADhMTa: 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: ]] 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDNkOTVjOGVkM2I1MzFkN2FjNWM2Mjg0ZWFiM2YxMjNkYjljMmNiYmE2MmU0ZmQxN2Q4OGNjYWRkZGU5NzdiOMR2CXI=: 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.107 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.369 09:12:29 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.369 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.369 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:07.369 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.369 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.369 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.369 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.369 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:07.369 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.369 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:07.369 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:07.369 09:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:07.369 09:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:07.369 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.369 09:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.942 nvme0n1 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: ]] 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.942 09:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.886 nvme0n1 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWIwMWU2MmQzZGJmMmM2MzkzMzllYTMyZDhlNjdlODcQGwlD: 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: ]] 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OThlMjBjYzAxY2RhMWRkOTEyODUwMWEzODIzNDIxNGGVLnzX: 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.886 09:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.829 nvme0n1 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzgzYjYwZjI4ZWIxODAzNjMxMGU4ODllYjkxNmIwOWNiYzM0MDEyN2E2OTgyN2ZjA44Iww==: 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: ]] 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNkNGQ3ZDhmZjJhNzMxOGNiNGY3NjliOWQyMzllZTM0FcH7: 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.829 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.830 09:12:32 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.830 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.401 nvme0n1 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.401 09:12:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU3NWM3ZGJjMTA4MjI1MDkwMjA5OTNhOTkyY2NiN2QwMDAwMTlhMTAyZWRmMjc0MGI4MTNhYmExZjQ2ZmFmNC2+R4E=: 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:10.401 09:12:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.401 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:10.402 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.402 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:10.402 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:10.402 09:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:10.402 09:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:10.402 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.402 09:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.342 nvme0n1 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkxOWQzMDU5MTljZDMzOGVjZjM1OTc5MTI2ZTljMWY5ZTA5YjRjNjljOGQ2MWI4axB7sQ==: 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: ]] 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4ZDE3NTU0MTk3ZjY0NTZlMDZlMDdiZWMyOGM2Y2ViNmNkYWM4NTQ5MDVlMWVihQHD6w==: 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@649 -- # local es=0 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:11.342 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.343 request: 00:34:11.343 { 00:34:11.343 "name": "nvme0", 00:34:11.343 "trtype": "tcp", 00:34:11.343 "traddr": "10.0.0.1", 00:34:11.343 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:11.343 "adrfam": "ipv4", 00:34:11.343 "trsvcid": "4420", 00:34:11.343 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:11.343 "method": "bdev_nvme_attach_controller", 00:34:11.343 "req_id": 1 00:34:11.343 } 00:34:11.343 Got JSON-RPC error response 00:34:11.343 response: 00:34:11.343 { 00:34:11.343 "code": -5, 00:34:11.343 "message": "Input/output error" 00:34:11.343 } 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:11.343 09:12:33 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key 
key2 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.343 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.604 request: 00:34:11.604 { 00:34:11.604 "name": "nvme0", 00:34:11.604 "trtype": "tcp", 00:34:11.604 "traddr": "10.0.0.1", 00:34:11.604 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:11.604 "adrfam": "ipv4", 00:34:11.604 "trsvcid": "4420", 00:34:11.604 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:11.604 "dhchap_key": "key2", 00:34:11.604 "method": "bdev_nvme_attach_controller", 00:34:11.604 "req_id": 1 00:34:11.604 } 00:34:11.604 Got JSON-RPC error response 00:34:11.604 response: 00:34:11.604 { 00:34:11.604 "code": -5, 00:34:11.604 "message": "Input/output error" 00:34:11.604 } 00:34:11.604 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:34:11.604 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 
00:34:11.604 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:11.604 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:11.604 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:11.604 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.604 09:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:11.604 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.604 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.604 09:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.604 09:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:11.604 09:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:11.604 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.604 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.604 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.604 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.604 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.604 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.604 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.604 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.604 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.604 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.604 09:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:11.604 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.605 request: 00:34:11.605 { 00:34:11.605 "name": "nvme0", 00:34:11.605 "trtype": "tcp", 00:34:11.605 "traddr": "10.0.0.1", 00:34:11.605 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:11.605 "adrfam": "ipv4", 00:34:11.605 "trsvcid": "4420", 00:34:11.605 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:11.605 "dhchap_key": "key1", 00:34:11.605 "dhchap_ctrlr_key": "ckey2", 00:34:11.605 "method": "bdev_nvme_attach_controller", 00:34:11.605 "req_id": 1 00:34:11.605 } 00:34:11.605 Got JSON-RPC error response 00:34:11.605 response: 00:34:11.605 { 00:34:11.605 
"code": -5, 00:34:11.605 "message": "Input/output error" 00:34:11.605 } 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:11.605 rmmod nvme_tcp 00:34:11.605 rmmod nvme_fabrics 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2811849 ']' 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2811849 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 2811849 ']' 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@953 -- # kill -0 2811849 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:11.605 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2811849 00:34:11.866 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:11.866 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:11.866 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2811849' 00:34:11.866 killing process with pid 2811849 00:34:11.866 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 2811849 00:34:11.866 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 2811849 00:34:11.866 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:11.866 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:11.866 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:11.866 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:11.866 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:11.866 09:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.866 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:11.866 09:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.413 09:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:14.413 09:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:14.413 09:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:14.413 09:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:14.413 09:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:14.413 09:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:34:14.413 09:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:14.413 09:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:14.413 09:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:14.413 09:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:14.413 09:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:14.413 09:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:34:14.413 09:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:17.718 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:17.718 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:17.718 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:17.718 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:17.718 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:17.718 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:17.718 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:17.718 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:17.718 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 
00:34:17.718 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:17.718 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:17.718 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:17.718 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:17.718 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:17.718 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:17.718 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:17.718 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:34:17.980 09:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.tuu /tmp/spdk.key-null.JEG /tmp/spdk.key-sha256.Sp4 /tmp/spdk.key-sha384.bRL /tmp/spdk.key-sha512.gCD /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:17.980 09:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:21.285 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:21.285 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:21.285 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:21.285 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:21.285 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:21.285 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:21.285 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:21.285 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:21.285 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:21.285 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:34:21.285 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:21.285 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:21.285 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:21.285 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:21.285 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:21.285 0000:00:01.0 
(8086 0b00): Already using the vfio-pci driver 00:34:21.285 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:21.545 00:34:21.545 real 0m57.897s 00:34:21.545 user 0m51.723s 00:34:21.546 sys 0m14.729s 00:34:21.546 09:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:21.546 09:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.546 ************************************ 00:34:21.546 END TEST nvmf_auth_host 00:34:21.546 ************************************ 00:34:21.807 09:12:44 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:34:21.807 09:12:44 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:21.807 09:12:44 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:21.807 09:12:44 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:21.807 09:12:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:21.807 ************************************ 00:34:21.807 START TEST nvmf_digest 00:34:21.807 ************************************ 00:34:21.807 09:12:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:21.807 * Looking for test storage... 
00:34:21.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:21.807 09:12:44 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:21.807 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:34:21.808 09:12:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:29.953 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:29.953 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:29.953 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:29.953 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk
00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:34:29.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:29.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms
00:34:29.953
00:34:29.953 --- 10.0.0.2 ping statistics ---
00:34:29.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:29.953 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms
00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:29.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:29.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.401 ms
00:34:29.953
00:34:29.953 --- 10.0.0.1 ping statistics ---
00:34:29.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:29.953 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms
00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:29.953 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0
00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable
00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:34:29.954 ************************************
00:34:29.954 START TEST nvmf_digest_clean
00:34:29.954 ************************************
00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # run_digest
00:34:29.954 09:12:51 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2828760 00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2828760 00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2828760 ']' 00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:29.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:29.954 09:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:29.954 [2024-06-09 09:12:51.507642] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:34:29.954 [2024-06-09 09:12:51.507700] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:29.954 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.954 [2024-06-09 09:12:51.577330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:29.954 [2024-06-09 09:12:51.651234] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:29.954 [2024-06-09 09:12:51.651273] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:29.954 [2024-06-09 09:12:51.651281] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:29.954 [2024-06-09 09:12:51.651287] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:29.954 [2024-06-09 09:12:51.651293] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:29.954 [2024-06-09 09:12:51.651317] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:29.954 null0 00:34:29.954 [2024-06-09 09:12:52.386095] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:29.954 [2024-06-09 09:12:52.410276] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:29.954 
09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2829088 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2829088 /var/tmp/bperf.sock 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2829088 ']' 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:29.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:29.954 09:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:29.954 [2024-06-09 09:12:52.464075] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:34:29.954 [2024-06-09 09:12:52.464121] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829088 ] 00:34:29.954 EAL: No free 2048 kB hugepages reported on node 1 00:34:30.215 [2024-06-09 09:12:52.540007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:30.215 [2024-06-09 09:12:52.603901] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:30.786 09:12:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:30.786 09:12:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:34:30.786 09:12:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:30.786 09:12:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:30.786 09:12:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:31.047 09:12:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:31.047 09:12:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:31.308 nvme0n1 00:34:31.308 09:12:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:31.308 09:12:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:34:31.588 Running I/O for 2 seconds... 00:34:33.504 00:34:33.504 Latency(us) 00:34:33.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:33.504 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:33.504 nvme0n1 : 2.00 20727.66 80.97 0.00 0.00 6167.47 3003.73 17803.95 00:34:33.504 =================================================================================================================== 00:34:33.504 Total : 20727.66 80.97 0.00 0.00 6167.47 3003.73 17803.95 00:34:33.504 0 00:34:33.504 09:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:33.504 09:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:33.504 09:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:33.504 09:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:33.504 | select(.opcode=="crc32c") 00:34:33.504 | "\(.module_name) \(.executed)"' 00:34:33.504 09:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2829088 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2829088 ']' 00:34:33.766 09:12:56 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2829088 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2829088 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2829088' 00:34:33.766 killing process with pid 2829088 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2829088 00:34:33.766 Received shutdown signal, test time was about 2.000000 seconds 00:34:33.766 00:34:33.766 Latency(us) 00:34:33.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:33.766 =================================================================================================================== 00:34:33.766 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2829088 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:33.766 09:12:56 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2829776 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2829776 /var/tmp/bperf.sock 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2829776 ']' 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:33.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:33.766 09:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:33.766 [2024-06-09 09:12:56.321003] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:34:33.766 [2024-06-09 09:12:56.321057] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829776 ] 00:34:33.766 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:33.766 Zero copy mechanism will not be used. 00:34:34.027 EAL: No free 2048 kB hugepages reported on node 1 00:34:34.027 [2024-06-09 09:12:56.395904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:34.027 [2024-06-09 09:12:56.459405] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:34.599 09:12:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:34.599 09:12:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:34:34.599 09:12:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:34.599 09:12:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:34.599 09:12:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:34.860 09:12:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:34.860 09:12:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:35.121 nvme0n1 00:34:35.121 09:12:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:35.121 09:12:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:35.121 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:35.121 Zero copy mechanism will not be used. 00:34:35.121 Running I/O for 2 seconds... 00:34:37.670 00:34:37.670 Latency(us) 00:34:37.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:37.670 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:37.670 nvme0n1 : 2.01 1873.56 234.19 0.00 0.00 8536.85 6389.76 15073.28 00:34:37.670 =================================================================================================================== 00:34:37.670 Total : 1873.56 234.19 0.00 0.00 8536.85 6389.76 15073.28 00:34:37.670 0 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:37.670 | select(.opcode=="crc32c") 00:34:37.670 | "\(.module_name) \(.executed)"' 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:37.670 09:12:59 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2829776 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2829776 ']' 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2829776 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2829776 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2829776' 00:34:37.670 killing process with pid 2829776 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2829776 00:34:37.670 Received shutdown signal, test time was about 2.000000 seconds 00:34:37.670 00:34:37.670 Latency(us) 00:34:37.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:37.670 =================================================================================================================== 00:34:37.670 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:37.670 09:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2829776 00:34:37.670 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:37.670 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:37.670 09:13:00 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:37.670 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:37.670 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:37.670 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:37.670 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:37.670 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:37.670 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2830459 00:34:37.670 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2830459 /var/tmp/bperf.sock 00:34:37.670 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2830459 ']' 00:34:37.670 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:37.670 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:37.670 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:37.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:37.670 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:37.670 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:37.670 [2024-06-09 09:13:00.063533] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:34:37.670 [2024-06-09 09:13:00.063611] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830459 ] 00:34:37.670 EAL: No free 2048 kB hugepages reported on node 1 00:34:37.670 [2024-06-09 09:13:00.139778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.670 [2024-06-09 09:13:00.192683] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:38.615 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:38.615 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:34:38.615 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:38.615 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:38.615 09:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:38.615 09:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:38.615 09:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:38.876 nvme0n1 00:34:38.876 09:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:38.876 09:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:34:38.876 Running I/O for 2 seconds... 00:34:40.791 00:34:40.791 Latency(us) 00:34:40.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:40.791 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:40.791 nvme0n1 : 2.01 21402.76 83.60 0.00 0.00 5972.62 2798.93 15837.87 00:34:40.791 =================================================================================================================== 00:34:40.791 Total : 21402.76 83.60 0.00 0.00 5972.62 2798.93 15837.87 00:34:40.791 0 00:34:40.791 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:40.791 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:40.791 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:40.791 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:40.791 | select(.opcode=="crc32c") 00:34:40.791 | "\(.module_name) \(.executed)"' 00:34:40.791 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:41.052 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:41.052 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:41.052 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:41.052 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:41.052 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2830459 00:34:41.052 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2830459 ']' 00:34:41.052 09:13:03 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2830459 00:34:41.052 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:34:41.052 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:41.052 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2830459 00:34:41.052 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:41.052 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:41.052 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2830459' 00:34:41.052 killing process with pid 2830459 00:34:41.052 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2830459 00:34:41.052 Received shutdown signal, test time was about 2.000000 seconds 00:34:41.052 00:34:41.052 Latency(us) 00:34:41.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.052 =================================================================================================================== 00:34:41.052 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:41.052 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2830459 00:34:41.313 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:34:41.313 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:41.313 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:41.313 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:41.313 09:13:03 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:41.313 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:41.313 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:41.313 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2831140 00:34:41.313 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2831140 /var/tmp/bperf.sock 00:34:41.314 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2831140 ']' 00:34:41.314 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:41.314 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:41.314 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:41.314 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:41.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:41.314 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:41.314 09:13:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:41.314 [2024-06-09 09:13:03.724918] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:34:41.314 [2024-06-09 09:13:03.724985] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831140 ] 00:34:41.314 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:41.314 Zero copy mechanism will not be used. 00:34:41.314 EAL: No free 2048 kB hugepages reported on node 1 00:34:41.314 [2024-06-09 09:13:03.800680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:41.314 [2024-06-09 09:13:03.852584] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:42.255 09:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:42.255 09:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:34:42.255 09:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:42.255 09:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:42.255 09:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:42.255 09:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:42.255 09:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:42.515 nvme0n1 00:34:42.515 09:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:42.515 09:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:42.774 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:42.774 Zero copy mechanism will not be used. 00:34:42.774 Running I/O for 2 seconds... 00:34:44.686 00:34:44.686 Latency(us) 00:34:44.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.686 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:44.686 nvme0n1 : 2.01 2240.15 280.02 0.00 0.00 7128.38 5679.79 22828.37 00:34:44.686 =================================================================================================================== 00:34:44.686 Total : 2240.15 280.02 0.00 0.00 7128.38 5679.79 22828.37 00:34:44.686 0 00:34:44.686 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:44.686 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:44.686 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:44.686 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:44.686 | select(.opcode=="crc32c") 00:34:44.686 | "\(.module_name) \(.executed)"' 00:34:44.686 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:44.947 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:44.947 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:44.947 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:44.947 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:44.947 09:13:07 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2831140 00:34:44.947 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2831140 ']' 00:34:44.947 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2831140 00:34:44.947 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:34:44.947 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:44.947 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2831140 00:34:44.947 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:44.947 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:44.947 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2831140' 00:34:44.947 killing process with pid 2831140 00:34:44.948 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2831140 00:34:44.948 Received shutdown signal, test time was about 2.000000 seconds 00:34:44.948 00:34:44.948 Latency(us) 00:34:44.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.948 =================================================================================================================== 00:34:44.948 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:44.948 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2831140 00:34:44.948 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2828760 00:34:44.948 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2828760 ']' 00:34:44.948 09:13:07 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2828760 00:34:44.948 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:34:44.948 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:44.948 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2828760 00:34:44.948 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:44.948 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:44.948 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2828760' 00:34:44.948 killing process with pid 2828760 00:34:44.948 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2828760 00:34:44.948 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2828760 00:34:45.209 00:34:45.209 real 0m16.186s 00:34:45.209 user 0m31.957s 00:34:45.209 sys 0m3.052s 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:45.209 ************************************ 00:34:45.209 END TEST nvmf_digest_clean 00:34:45.209 ************************************ 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:45.209 
************************************ 00:34:45.209 START TEST nvmf_digest_error 00:34:45.209 ************************************ 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # run_digest_error 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2831964 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2831964 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2831964 ']' 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:45.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:45.209 09:13:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:45.469 [2024-06-09 09:13:07.771641] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:34:45.469 [2024-06-09 09:13:07.771688] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:45.469 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.469 [2024-06-09 09:13:07.837920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.470 [2024-06-09 09:13:07.908407] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:45.470 [2024-06-09 09:13:07.908444] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:45.470 [2024-06-09 09:13:07.908452] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:45.470 [2024-06-09 09:13:07.908458] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:45.470 [2024-06-09 09:13:07.908463] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:45.470 [2024-06-09 09:13:07.908483] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.040 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:46.040 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:34:46.040 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:46.041 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:46.041 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.041 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:46.041 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:46.041 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:46.041 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.041 [2024-06-09 09:13:08.578410] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:46.041 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:46.041 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:34:46.041 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:34:46.041 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:46.041 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.301 null0 00:34:46.301 [2024-06-09 09:13:08.658780] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:46.301 
[2024-06-09 09:13:08.682973] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:46.301 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:46.301 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:34:46.301 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:46.301 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:46.301 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:46.301 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:46.301 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2832195 00:34:46.302 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2832195 /var/tmp/bperf.sock 00:34:46.302 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2832195 ']' 00:34:46.302 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:34:46.302 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:46.302 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:46.302 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:46.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:46.302 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:46.302 09:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.302 [2024-06-09 09:13:08.735621] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:34:46.302 [2024-06-09 09:13:08.735666] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832195 ] 00:34:46.302 EAL: No free 2048 kB hugepages reported on node 1 00:34:46.302 [2024-06-09 09:13:08.811085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.563 [2024-06-09 09:13:08.864679] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:47.135 09:13:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:47.135 09:13:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:34:47.135 09:13:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:47.135 09:13:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:47.135 09:13:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:47.135 09:13:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:47.136 09:13:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:47.136 09:13:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:47.136 09:13:09 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:47.136 09:13:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:47.708 nvme0n1 00:34:47.708 09:13:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:47.708 09:13:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:47.708 09:13:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:47.708 09:13:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:47.708 09:13:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:47.708 09:13:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:47.708 Running I/O for 2 seconds... 
00:34:47.708 [2024-06-09 09:13:10.165204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.708 [2024-06-09 09:13:10.165233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.708 [2024-06-09 09:13:10.165243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.708 [2024-06-09 09:13:10.178059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.708 [2024-06-09 09:13:10.178078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.708 [2024-06-09 09:13:10.178086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.708 [2024-06-09 09:13:10.190866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.708 [2024-06-09 09:13:10.190885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.708 [2024-06-09 09:13:10.190891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.708 [2024-06-09 09:13:10.203491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.708 [2024-06-09 09:13:10.203509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.708 [2024-06-09 09:13:10.203516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.708 [2024-06-09 09:13:10.215023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.708 [2024-06-09 09:13:10.215040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.708 [2024-06-09 09:13:10.215046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.708 [2024-06-09 09:13:10.225661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.708 [2024-06-09 09:13:10.225677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.708 [2024-06-09 09:13:10.225688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.708 [2024-06-09 09:13:10.239492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.708 [2024-06-09 09:13:10.239509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.708 [2024-06-09 09:13:10.239515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.708 [2024-06-09 09:13:10.251149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.708 [2024-06-09 09:13:10.251166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.708 [2024-06-09 09:13:10.251172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.708 [2024-06-09 09:13:10.264704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.708 [2024-06-09 09:13:10.264721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.708 [2024-06-09 09:13:10.264728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.276540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.276557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.276564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.288575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.288593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.288599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.300794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.300811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8310 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.300817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.312799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.312816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.312822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.325011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.325028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.325034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.337384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.337408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.337414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.349091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.349108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:64 nsid:1 lba:2002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.349114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.361299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.361316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.361322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.373740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.373757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.373764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.386594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.386611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.386617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.397896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.397913] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.397920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.410108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.410124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.410131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.423730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.423746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.423752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.434010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.434026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.434033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.448251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.448268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.448274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.460584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.460600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.460607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.472188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.472203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.472210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.484381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.484397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.484406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.496993] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.497009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.497015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.508336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.508353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.508359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.970 [2024-06-09 09:13:10.521775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:47.970 [2024-06-09 09:13:10.521792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.970 [2024-06-09 09:13:10.521798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.231 [2024-06-09 09:13:10.533045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:48.231 [2024-06-09 09:13:10.533061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.231 [2024-06-09 09:13:10.533068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:48.231 [2024-06-09 09:13:10.545609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:48.231 [2024-06-09 09:13:10.545625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.231 [2024-06-09 09:13:10.545634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.231 [2024-06-09 09:13:10.557299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:48.231 [2024-06-09 09:13:10.557315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.231 [2024-06-09 09:13:10.557321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.231 [2024-06-09 09:13:10.568650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:48.231 [2024-06-09 09:13:10.568666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.231 [2024-06-09 09:13:10.568672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.231 [2024-06-09 09:13:10.582117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:48.231 [2024-06-09 09:13:10.582133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.231 [2024-06-09 09:13:10.582139] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.231 [2024-06-09 09:13:10.594532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:48.231 [2024-06-09 09:13:10.594549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.231 [2024-06-09 09:13:10.594557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.231 [2024-06-09 09:13:10.606717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:48.231 [2024-06-09 09:13:10.606733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.231 [2024-06-09 09:13:10.606739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.231 [2024-06-09 09:13:10.617561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:48.231 [2024-06-09 09:13:10.617577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.231 [2024-06-09 09:13:10.617583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.231 [2024-06-09 09:13:10.631024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:48.231 [2024-06-09 09:13:10.631041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.231 [2024-06-09 
09:13:10.631047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.231 [2024-06-09 09:13:10.644288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.231 [2024-06-09 09:13:10.644305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.231 [2024-06-09 09:13:10.644311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.231 [2024-06-09 09:13:10.654970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.231 [2024-06-09 09:13:10.654987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.231 [2024-06-09 09:13:10.654993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.231 [2024-06-09 09:13:10.668028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.231 [2024-06-09 09:13:10.668044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.231 [2024-06-09 09:13:10.668050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.231 [2024-06-09 09:13:10.680115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.231 [2024-06-09 09:13:10.680131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.231 [2024-06-09 09:13:10.680137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.231 [2024-06-09 09:13:10.692238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.231 [2024-06-09 09:13:10.692255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.231 [2024-06-09 09:13:10.692262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.231 [2024-06-09 09:13:10.704859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.231 [2024-06-09 09:13:10.704875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.231 [2024-06-09 09:13:10.704882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.231 [2024-06-09 09:13:10.716939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.231 [2024-06-09 09:13:10.716955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.231 [2024-06-09 09:13:10.716961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.231 [2024-06-09 09:13:10.728300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.231 [2024-06-09 09:13:10.728317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.231 [2024-06-09 09:13:10.728323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.231 [2024-06-09 09:13:10.740428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.231 [2024-06-09 09:13:10.740444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.231 [2024-06-09 09:13:10.740450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.231 [2024-06-09 09:13:10.754076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.231 [2024-06-09 09:13:10.754092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.232 [2024-06-09 09:13:10.754101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.232 [2024-06-09 09:13:10.765722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.232 [2024-06-09 09:13:10.765739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.232 [2024-06-09 09:13:10.765745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.232 [2024-06-09 09:13:10.777761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.232 [2024-06-09 09:13:10.777777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.232 [2024-06-09 09:13:10.777783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.501 [2024-06-09 09:13:10.789193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.501 [2024-06-09 09:13:10.789210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.501 [2024-06-09 09:13:10.789217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.501 [2024-06-09 09:13:10.801147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.501 [2024-06-09 09:13:10.801163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.501 [2024-06-09 09:13:10.801169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.501 [2024-06-09 09:13:10.814317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.501 [2024-06-09 09:13:10.814333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.501 [2024-06-09 09:13:10.814340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.501 [2024-06-09 09:13:10.826058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.501 [2024-06-09 09:13:10.826074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.501 [2024-06-09 09:13:10.826082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.501 [2024-06-09 09:13:10.837382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.501 [2024-06-09 09:13:10.837398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.501 [2024-06-09 09:13:10.837408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.501 [2024-06-09 09:13:10.850058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.502 [2024-06-09 09:13:10.850075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.502 [2024-06-09 09:13:10.850081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.502 [2024-06-09 09:13:10.862507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.502 [2024-06-09 09:13:10.862527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.502 [2024-06-09 09:13:10.862533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.502 [2024-06-09 09:13:10.874450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.502 [2024-06-09 09:13:10.874466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.502 [2024-06-09 09:13:10.874472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.502 [2024-06-09 09:13:10.887027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.502 [2024-06-09 09:13:10.887044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.502 [2024-06-09 09:13:10.887050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.502 [2024-06-09 09:13:10.899635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.502 [2024-06-09 09:13:10.899652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.502 [2024-06-09 09:13:10.899658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.502 [2024-06-09 09:13:10.911281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.502 [2024-06-09 09:13:10.911298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.502 [2024-06-09 09:13:10.911304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.502 [2024-06-09 09:13:10.923567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.502 [2024-06-09 09:13:10.923584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.502 [2024-06-09 09:13:10.923590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.502 [2024-06-09 09:13:10.935325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.502 [2024-06-09 09:13:10.935341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.502 [2024-06-09 09:13:10.935347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.502 [2024-06-09 09:13:10.948130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.502 [2024-06-09 09:13:10.948147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.502 [2024-06-09 09:13:10.948153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.502 [2024-06-09 09:13:10.960886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.502 [2024-06-09 09:13:10.960902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.502 [2024-06-09 09:13:10.960907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.502 [2024-06-09 09:13:10.972680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.502 [2024-06-09 09:13:10.972696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.502 [2024-06-09 09:13:10.972702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.502 [2024-06-09 09:13:10.985131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.502 [2024-06-09 09:13:10.985147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.502 [2024-06-09 09:13:10.985153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.502 [2024-06-09 09:13:10.996968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.502 [2024-06-09 09:13:10.996984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.502 [2024-06-09 09:13:10.996990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.502 [2024-06-09 09:13:11.009647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.502 [2024-06-09 09:13:11.009663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.502 [2024-06-09 09:13:11.009669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.502 [2024-06-09 09:13:11.020404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.502 [2024-06-09 09:13:11.020421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.502 [2024-06-09 09:13:11.020427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.502 [2024-06-09 09:13:11.033603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.502 [2024-06-09 09:13:11.033619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.502 [2024-06-09 09:13:11.033625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.502 [2024-06-09 09:13:11.045331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.502 [2024-06-09 09:13:11.045347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.502 [2024-06-09 09:13:11.045353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.058473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.058490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.058496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.070079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.070096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.070105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.082037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.082053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.082060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.094428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.094445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.094452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.106584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.106601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.106607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.118607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.118624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.118630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.130834] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.130850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.130857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.142075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.142092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.142098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.155673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.155690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.155696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.168375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.168391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.168397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.181279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.181300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.181307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.192120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.192136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.192142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.204500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.204516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.204522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.216252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.216269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.216275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.228327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.228343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.228349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.241439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.241456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.241462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.253314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.253331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.253337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.265918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.265935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.265942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.277815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.277831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.277838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.290125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.290142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.290149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.302301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.302319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.302325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.314106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.314122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.314128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.326141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.326157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.326163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.338582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.338598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.338604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.350745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.825 [2024-06-09 09:13:11.350761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.825 [2024-06-09 09:13:11.350768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.825 [2024-06-09 09:13:11.363272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:48.826 [2024-06-09 09:13:11.363289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.826 [2024-06-09 09:13:11.363297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.086 [2024-06-09 09:13:11.374897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.086 [2024-06-09 09:13:11.374914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.086 [2024-06-09 09:13:11.374921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.086 [2024-06-09 09:13:11.387355] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.086 [2024-06-09 09:13:11.387375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.086 [2024-06-09 09:13:11.387382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.086 [2024-06-09 09:13:11.399050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.086 [2024-06-09 09:13:11.399067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.086 [2024-06-09 09:13:11.399074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.086 [2024-06-09 09:13:11.411239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.086 [2024-06-09 09:13:11.411256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.086 [2024-06-09 09:13:11.411263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.086 [2024-06-09 09:13:11.422607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.086 [2024-06-09 09:13:11.422623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.086 [2024-06-09 09:13:11.422630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.086 [2024-06-09 09:13:11.436505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.086 [2024-06-09 09:13:11.436522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.086 [2024-06-09 09:13:11.436530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.086 [2024-06-09 09:13:11.447959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.086 [2024-06-09 09:13:11.447975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.086 [2024-06-09 09:13:11.447981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.087 [2024-06-09 09:13:11.459997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.087 [2024-06-09 09:13:11.460014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.087 [2024-06-09 09:13:11.460020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.087 [2024-06-09 09:13:11.471298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.087 [2024-06-09 09:13:11.471314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.087 [2024-06-09 09:13:11.471320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.087 [2024-06-09 09:13:11.484542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.087 [2024-06-09 09:13:11.484559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.087 [2024-06-09 09:13:11.484566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.087 [2024-06-09 09:13:11.498103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.087 [2024-06-09 09:13:11.498120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.087 [2024-06-09 09:13:11.498126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.087 [2024-06-09 09:13:11.509863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.087 [2024-06-09 09:13:11.509879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.087 [2024-06-09 09:13:11.509886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.087 [2024-06-09 09:13:11.521715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.087 [2024-06-09 09:13:11.521732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.087 [2024-06-09 09:13:11.521738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.087 [2024-06-09 09:13:11.533318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.087 [2024-06-09 09:13:11.533335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.087 [2024-06-09 09:13:11.533341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.087 [2024-06-09 09:13:11.546711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.087 [2024-06-09 09:13:11.546729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.087 [2024-06-09 09:13:11.546734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.087 [2024-06-09 09:13:11.558256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.087 [2024-06-09 09:13:11.558272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.087 [2024-06-09 09:13:11.558278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.087 [2024-06-09 09:13:11.570102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.087 [2024-06-09 09:13:11.570118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.087 [2024-06-09 09:13:11.570124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.087 [2024-06-09 09:13:11.582463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.087 [2024-06-09 09:13:11.582479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.087 [2024-06-09 09:13:11.582486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.087 [2024-06-09 09:13:11.593415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.087 [2024-06-09 09:13:11.593431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.087 [2024-06-09 09:13:11.593440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.087 [2024-06-09 09:13:11.607933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.087 [2024-06-09 09:13:11.607951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.087 [2024-06-09 09:13:11.607958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.087 [2024-06-09 09:13:11.620267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0)
00:34:49.087 [2024-06-09 09:13:11.620283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.087 [2024-06-09 09:13:11.620290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.087 [2024-06-09 09:13:11.631136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.087 [2024-06-09 09:13:11.631153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.087 [2024-06-09 09:13:11.631159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.087 [2024-06-09 09:13:11.642895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.087 [2024-06-09 09:13:11.642912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.087 [2024-06-09 09:13:11.642918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.348 [2024-06-09 09:13:11.655259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.348 [2024-06-09 09:13:11.655275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.348 [2024-06-09 09:13:11.655281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.348 [2024-06-09 09:13:11.668135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.348 [2024-06-09 09:13:11.668152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.348 [2024-06-09 
09:13:11.668159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.348 [2024-06-09 09:13:11.679554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.348 [2024-06-09 09:13:11.679571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.348 [2024-06-09 09:13:11.679577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.348 [2024-06-09 09:13:11.692290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.348 [2024-06-09 09:13:11.692308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.348 [2024-06-09 09:13:11.692314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.348 [2024-06-09 09:13:11.705799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.348 [2024-06-09 09:13:11.705819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.348 [2024-06-09 09:13:11.705825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.348 [2024-06-09 09:13:11.717300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.348 [2024-06-09 09:13:11.717317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.349 [2024-06-09 09:13:11.717322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.349 [2024-06-09 09:13:11.729821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.349 [2024-06-09 09:13:11.729838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.349 [2024-06-09 09:13:11.729844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.349 [2024-06-09 09:13:11.741880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.349 [2024-06-09 09:13:11.741897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.349 [2024-06-09 09:13:11.741903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.349 [2024-06-09 09:13:11.753969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.349 [2024-06-09 09:13:11.753985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.349 [2024-06-09 09:13:11.753992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.349 [2024-06-09 09:13:11.765875] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.349 [2024-06-09 09:13:11.765891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.349 [2024-06-09 09:13:11.765897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.349 [2024-06-09 09:13:11.778467] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.349 [2024-06-09 09:13:11.778484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.349 [2024-06-09 09:13:11.778490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.349 [2024-06-09 09:13:11.789977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.349 [2024-06-09 09:13:11.789994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.349 [2024-06-09 09:13:11.790001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.349 [2024-06-09 09:13:11.802651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.349 [2024-06-09 09:13:11.802667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.349 [2024-06-09 09:13:11.802673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.349 [2024-06-09 09:13:11.814284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1492ee0) 00:34:49.349 [2024-06-09 09:13:11.814301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.349 [2024-06-09 09:13:11.814307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.349 [2024-06-09 09:13:11.826225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.349 [2024-06-09 09:13:11.826241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.349 [2024-06-09 09:13:11.826247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.349 [2024-06-09 09:13:11.838461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.349 [2024-06-09 09:13:11.838478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.349 [2024-06-09 09:13:11.838484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.349 [2024-06-09 09:13:11.850447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.349 [2024-06-09 09:13:11.850463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.349 [2024-06-09 09:13:11.850470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.349 [2024-06-09 09:13:11.862516] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.349 [2024-06-09 09:13:11.862533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.349 [2024-06-09 09:13:11.862539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.349 [2024-06-09 09:13:11.874339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.349 [2024-06-09 09:13:11.874356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.349 [2024-06-09 09:13:11.874362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.349 [2024-06-09 09:13:11.887500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.349 [2024-06-09 09:13:11.887517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.349 [2024-06-09 09:13:11.887523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.349 [2024-06-09 09:13:11.899367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.349 [2024-06-09 09:13:11.899384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.349 [2024-06-09 09:13:11.899390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:49.610 [2024-06-09 09:13:11.912325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.610 [2024-06-09 09:13:11.912341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.610 [2024-06-09 09:13:11.912350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.610 [2024-06-09 09:13:11.924062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.610 [2024-06-09 09:13:11.924078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.610 [2024-06-09 09:13:11.924084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.610 [2024-06-09 09:13:11.935681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.610 [2024-06-09 09:13:11.935698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.610 [2024-06-09 09:13:11.935704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.610 [2024-06-09 09:13:11.947460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.610 [2024-06-09 09:13:11.947477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.610 [2024-06-09 09:13:11.947483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.610 [2024-06-09 09:13:11.960641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.610 [2024-06-09 09:13:11.960658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.610 [2024-06-09 09:13:11.960664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.610 [2024-06-09 09:13:11.972643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.610 [2024-06-09 09:13:11.972659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.610 [2024-06-09 09:13:11.972665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.610 [2024-06-09 09:13:11.984645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.610 [2024-06-09 09:13:11.984662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.610 [2024-06-09 09:13:11.984668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.610 [2024-06-09 09:13:11.996763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.610 [2024-06-09 09:13:11.996780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.610 [2024-06-09 
09:13:11.996786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.610 [2024-06-09 09:13:12.009276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.610 [2024-06-09 09:13:12.009292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.610 [2024-06-09 09:13:12.009298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.610 [2024-06-09 09:13:12.020061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.610 [2024-06-09 09:13:12.020077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.610 [2024-06-09 09:13:12.020083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.610 [2024-06-09 09:13:12.033009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.611 [2024-06-09 09:13:12.033026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.611 [2024-06-09 09:13:12.033033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.611 [2024-06-09 09:13:12.045009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.611 [2024-06-09 09:13:12.045025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22527 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.611 [2024-06-09 09:13:12.045031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.611 [2024-06-09 09:13:12.057228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.611 [2024-06-09 09:13:12.057244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.611 [2024-06-09 09:13:12.057250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.611 [2024-06-09 09:13:12.068919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.611 [2024-06-09 09:13:12.068935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.611 [2024-06-09 09:13:12.068941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.611 [2024-06-09 09:13:12.081353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.611 [2024-06-09 09:13:12.081370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.611 [2024-06-09 09:13:12.081376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.611 [2024-06-09 09:13:12.093322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.611 [2024-06-09 09:13:12.093338] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.611 [2024-06-09 09:13:12.093343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.611 [2024-06-09 09:13:12.105272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.611 [2024-06-09 09:13:12.105289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.611 [2024-06-09 09:13:12.105295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.611 [2024-06-09 09:13:12.118104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.611 [2024-06-09 09:13:12.118120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.611 [2024-06-09 09:13:12.118129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.611 [2024-06-09 09:13:12.129597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.611 [2024-06-09 09:13:12.129613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.611 [2024-06-09 09:13:12.129619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.611 [2024-06-09 09:13:12.141542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1492ee0) 00:34:49.611 [2024-06-09 
09:13:12.141558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.611 [2024-06-09 09:13:12.141564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.611 00:34:49.611 Latency(us) 00:34:49.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.611 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:49.611 nvme0n1 : 2.00 20816.29 81.31 0.00 0.00 6142.94 3850.24 20753.07 00:34:49.611 =================================================================================================================== 00:34:49.611 Total : 20816.29 81.31 0.00 0.00 6142.94 3850.24 20753.07 00:34:49.611 0 00:34:49.872 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:49.872 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:49.872 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:49.872 | .driver_specific 00:34:49.872 | .nvme_error 00:34:49.872 | .status_code 00:34:49.872 | .command_transient_transport_error' 00:34:49.872 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:49.872 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 )) 00:34:49.872 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2832195 00:34:49.872 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2832195 ']' 00:34:49.872 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2832195 00:34:49.872 09:13:12 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:34:49.872 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:49.872 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2832195 00:34:49.872 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:49.872 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:49.872 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2832195' 00:34:49.872 killing process with pid 2832195 00:34:49.872 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2832195 00:34:49.872 Received shutdown signal, test time was about 2.000000 seconds 00:34:49.872 00:34:49.872 Latency(us) 00:34:49.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.872 =================================================================================================================== 00:34:49.872 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:49.872 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2832195 00:34:50.133 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:34:50.133 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:50.133 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:50.133 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:50.133 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:50.133 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2832889 
00:34:50.133 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2832889 /var/tmp/bperf.sock 00:34:50.133 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2832889 ']' 00:34:50.133 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:34:50.133 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:50.133 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:50.133 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:50.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:50.133 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:50.133 09:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:50.133 [2024-06-09 09:13:12.544482] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:34:50.133 [2024-06-09 09:13:12.544536] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832889 ] 00:34:50.133 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:50.133 Zero copy mechanism will not be used. 
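The flood of "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR records in this log is expected: the test attaches the controller with data digests enabled (`--ddgst`) and then uses `accel_error_inject_error -o crc32c -t corrupt` to corrupt the computed checksums, so every READ fails digest verification. NVMe/TCP digests are CRC-32C (Castagnoli) checksums over the PDU payload. As a minimal illustrative sketch only (a bitwise pure-Python CRC-32C, not SPDK's accel-framework implementation):

```python
# Illustrative pure-Python CRC-32C (Castagnoli), the checksum NVMe/TCP
# uses for its optional header/data digests. Reflected polynomial
# 0x82F63B78; init and final XOR are both 0xFFFFFFFF. This is a sketch
# for understanding the log above, not SPDK code.

def crc32c(data: bytes) -> int:
    """Compute CRC-32C bit by bit over `data`."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right; XOR in the reflected polynomial on carry-out.
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

if __name__ == "__main__":
    # Standard CRC-32C check value for the ASCII string "123456789".
    print(hex(crc32c(b"123456789")))  # 0xe3069283
```

When the injected corruption flips the digest the target sends, the initiator's recompute (seen here in `nvme_tcp_accel_seq_recv_compute_crc32_done`) mismatches and the command completes with the transient transport error status counted by the test.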
00:34:50.133 EAL: No free 2048 kB hugepages reported on node 1 00:34:50.133 [2024-06-09 09:13:12.618724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.134 [2024-06-09 09:13:12.671410] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:51.072 09:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:51.072 09:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:34:51.072 09:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:51.072 09:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:51.072 09:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:51.072 09:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:51.072 09:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:51.072 09:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:51.072 09:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:51.072 09:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:51.333 nvme0n1 00:34:51.333 09:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 32 00:34:51.333 09:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:51.333 09:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:51.333 09:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:51.333 09:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:51.333 09:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:51.333 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:51.333 Zero copy mechanism will not be used. 00:34:51.333 Running I/O for 2 seconds... 00:34:51.333 [2024-06-09 09:13:13.820435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.333 [2024-06-09 09:13:13.820465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.333 [2024-06-09 09:13:13.820473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.333 [2024-06-09 09:13:13.838804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.333 [2024-06-09 09:13:13.838824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.333 [2024-06-09 09:13:13.838831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.333 [2024-06-09 09:13:13.854781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1452f10) 00:34:51.333 [2024-06-09 09:13:13.854800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.334 [2024-06-09 09:13:13.854807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.334 [2024-06-09 09:13:13.869659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.334 [2024-06-09 09:13:13.869677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.334 [2024-06-09 09:13:13.869684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.334 [2024-06-09 09:13:13.886630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.334 [2024-06-09 09:13:13.886647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.334 [2024-06-09 09:13:13.886654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.594 [2024-06-09 09:13:13.902901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.595 [2024-06-09 09:13:13.902918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.595 [2024-06-09 09:13:13.902925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.595 [2024-06-09 09:13:13.920477] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.595 [2024-06-09 09:13:13.920494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.595 [2024-06-09 09:13:13.920501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.595 [2024-06-09 09:13:13.937787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.595 [2024-06-09 09:13:13.937808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.595 [2024-06-09 09:13:13.937815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.595 [2024-06-09 09:13:13.954478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.595 [2024-06-09 09:13:13.954495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.595 [2024-06-09 09:13:13.954501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.595 [2024-06-09 09:13:13.972691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.595 [2024-06-09 09:13:13.972708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.595 [2024-06-09 09:13:13.972714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:34:51.595 [2024-06-09 09:13:13.988130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.595 [2024-06-09 09:13:13.988148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.595 [2024-06-09 09:13:13.988154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.595 [2024-06-09 09:13:14.004880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.595 [2024-06-09 09:13:14.004898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.595 [2024-06-09 09:13:14.004905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.595 [2024-06-09 09:13:14.020110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.595 [2024-06-09 09:13:14.020127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.595 [2024-06-09 09:13:14.020134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.595 [2024-06-09 09:13:14.037606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.595 [2024-06-09 09:13:14.037622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.595 [2024-06-09 09:13:14.037629] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.595 [2024-06-09 09:13:14.055291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.595 [2024-06-09 09:13:14.055307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.595 [2024-06-09 09:13:14.055314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.595 [2024-06-09 09:13:14.072653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.595 [2024-06-09 09:13:14.072669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.595 [2024-06-09 09:13:14.072679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.595 [2024-06-09 09:13:14.086614] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.595 [2024-06-09 09:13:14.086631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.595 [2024-06-09 09:13:14.086637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.595 [2024-06-09 09:13:14.103022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.595 [2024-06-09 09:13:14.103038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.595 [2024-06-09 
09:13:14.103045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.595 [2024-06-09 09:13:14.119558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.595 [2024-06-09 09:13:14.119574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.595 [2024-06-09 09:13:14.119581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.595 [2024-06-09 09:13:14.136972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.595 [2024-06-09 09:13:14.136989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.595 [2024-06-09 09:13:14.136995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.855 [2024-06-09 09:13:14.156293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.856 [2024-06-09 09:13:14.156310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.856 [2024-06-09 09:13:14.156317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.856 [2024-06-09 09:13:14.173489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.856 [2024-06-09 09:13:14.173505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.856 [2024-06-09 09:13:14.173512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.856 [2024-06-09 09:13:14.189310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.856 [2024-06-09 09:13:14.189327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.856 [2024-06-09 09:13:14.189334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.856 [2024-06-09 09:13:14.205427] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.856 [2024-06-09 09:13:14.205444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.856 [2024-06-09 09:13:14.205450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.856 [2024-06-09 09:13:14.221330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.856 [2024-06-09 09:13:14.221350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.856 [2024-06-09 09:13:14.221356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.856 [2024-06-09 09:13:14.237261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.856 [2024-06-09 09:13:14.237278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.856 [2024-06-09 09:13:14.237284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.856 [2024-06-09 09:13:14.253770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.856 [2024-06-09 09:13:14.253787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.856 [2024-06-09 09:13:14.253793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.856 [2024-06-09 09:13:14.270236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.856 [2024-06-09 09:13:14.270253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.856 [2024-06-09 09:13:14.270259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.856 [2024-06-09 09:13:14.286646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.856 [2024-06-09 09:13:14.286663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.856 [2024-06-09 09:13:14.286669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.856 [2024-06-09 09:13:14.304586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1452f10) 00:34:51.856 [2024-06-09 09:13:14.304602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.856 [2024-06-09 09:13:14.304608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.856 [2024-06-09 09:13:14.320769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.856 [2024-06-09 09:13:14.320785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.856 [2024-06-09 09:13:14.320791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.856 [2024-06-09 09:13:14.335879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.856 [2024-06-09 09:13:14.335896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.856 [2024-06-09 09:13:14.335903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.856 [2024-06-09 09:13:14.351765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.856 [2024-06-09 09:13:14.351782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.856 [2024-06-09 09:13:14.351787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.856 [2024-06-09 09:13:14.367984] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.856 [2024-06-09 09:13:14.368000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.856 [2024-06-09 09:13:14.368006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.856 [2024-06-09 09:13:14.385479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.856 [2024-06-09 09:13:14.385495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.856 [2024-06-09 09:13:14.385501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.856 [2024-06-09 09:13:14.402106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:51.856 [2024-06-09 09:13:14.402124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.856 [2024-06-09 09:13:14.402131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.117 [2024-06-09 09:13:14.418233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.117 [2024-06-09 09:13:14.418251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.117 [2024-06-09 09:13:14.418257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:34:52.117 [2024-06-09 09:13:14.434770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.117 [2024-06-09 09:13:14.434787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.117 [2024-06-09 09:13:14.434794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.117 [2024-06-09 09:13:14.451807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.117 [2024-06-09 09:13:14.451825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.117 [2024-06-09 09:13:14.451831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.117 [2024-06-09 09:13:14.468277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.117 [2024-06-09 09:13:14.468294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.117 [2024-06-09 09:13:14.468300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.117 [2024-06-09 09:13:14.485387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.117 [2024-06-09 09:13:14.485409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.117 [2024-06-09 09:13:14.485416] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.117 [2024-06-09 09:13:14.501020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.117 [2024-06-09 09:13:14.501036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.117 [2024-06-09 09:13:14.501045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.117 [2024-06-09 09:13:14.516891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.117 [2024-06-09 09:13:14.516909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.117 [2024-06-09 09:13:14.516915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.117 [2024-06-09 09:13:14.534431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.117 [2024-06-09 09:13:14.534453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.117 [2024-06-09 09:13:14.534460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.117 [2024-06-09 09:13:14.550596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.117 [2024-06-09 09:13:14.550613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.117 [2024-06-09 
09:13:14.550619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.117 [2024-06-09 09:13:14.568342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.117 [2024-06-09 09:13:14.568359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.117 [2024-06-09 09:13:14.568365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.117 [2024-06-09 09:13:14.584818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.117 [2024-06-09 09:13:14.584835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.117 [2024-06-09 09:13:14.584841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.117 [2024-06-09 09:13:14.600907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.117 [2024-06-09 09:13:14.600924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.117 [2024-06-09 09:13:14.600931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.117 [2024-06-09 09:13:14.617650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.117 [2024-06-09 09:13:14.617667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.117 [2024-06-09 09:13:14.617673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.117 [2024-06-09 09:13:14.633954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.117 [2024-06-09 09:13:14.633971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.117 [2024-06-09 09:13:14.633978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.117 [2024-06-09 09:13:14.650606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.117 [2024-06-09 09:13:14.650623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.117 [2024-06-09 09:13:14.650630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.117 [2024-06-09 09:13:14.668062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.117 [2024-06-09 09:13:14.668078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.117 [2024-06-09 09:13:14.668085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.378 [2024-06-09 09:13:14.684008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.379 [2024-06-09 09:13:14.684025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.379 [2024-06-09 09:13:14.684032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.379 [2024-06-09 09:13:14.700738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.379 [2024-06-09 09:13:14.700755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.379 [2024-06-09 09:13:14.700761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.379 [2024-06-09 09:13:14.718174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.379 [2024-06-09 09:13:14.718192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.379 [2024-06-09 09:13:14.718198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.379 [2024-06-09 09:13:14.734571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10) 00:34:52.379 [2024-06-09 09:13:14.734588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.379 [2024-06-09 09:13:14.734595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.379 [2024-06-09 09:13:14.752626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1452f10) 00:34:52.379 [2024-06-09 09:13:14.752644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.379 [2024-06-09 09:13:14.752650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern -- nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10), the READ command print (sqid:1 cid:15 nsid:1 len:32), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats from 09:13:14.769289 through 09:13:15.772385 with varying lba and sqhd values; identical repeats elided ...]
00:34:53.425 [2024-06-09 09:13:15.789538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1452f10)
00:34:53.425 [2024-06-09 09:13:15.789555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.425 [2024-06-09 09:13:15.789561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0
sqhd:0041 p:0 m:0 dnr:0
00:34:53.425 
00:34:53.425 Latency(us)
00:34:53.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:53.425 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:34:53.425 nvme0n1 : 2.00 1833.61 229.20 0.00 0.00 8721.04 5133.65 20643.84
00:34:53.425 ===================================================================================================================
00:34:53.425 Total : 1833.61 229.20 0.00 0.00 8721.04 5133.65 20643.84
00:34:53.425 0
00:34:53.425 09:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:53.425 09:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:53.425 09:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:53.425 | .driver_specific
00:34:53.425 | .nvme_error
00:34:53.425 | .status_code
00:34:53.425 | .command_transient_transport_error'
00:34:53.425 09:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:53.686 09:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 118 > 0 ))
00:34:53.686 09:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2832889
00:34:53.686 09:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2832889 ']'
00:34:53.686 09:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2832889
00:34:53.686 09:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:34:53.686 09:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:34:53.686 09:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2832889
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2832889'
00:34:53.686 killing process with pid 2832889
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2832889
00:34:53.686 Received shutdown signal, test time was about 2.000000 seconds
00:34:53.686 
00:34:53.686 Latency(us)
00:34:53.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:53.686 ===================================================================================================================
00:34:53.686 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2832889
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2833571
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2833571 /var/tmp/bperf.sock
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2833571 ']'
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:34:53.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:34:53.686 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:53.686 [2024-06-09 09:13:16.186211] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:34:53.686 [2024-06-09 09:13:16.186263] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833571 ]
00:34:53.686 EAL: No free 2048 kB hugepages reported on node 1
00:34:53.947 [2024-06-09 09:13:16.260980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:53.947 [2024-06-09 09:13:16.314546] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:34:54.519 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:34:54.519 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:34:54.519 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:54.519 09:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:54.780 09:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:54.780 09:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:34:54.780 09:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:54.780 09:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:34:54.780 09:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:54.780 09:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:55.041 nvme0n1
00:34:55.041 09:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:34:55.041 09:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:34:55.041 09:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:55.041 09:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:34:55.041 09:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:55.041 09:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:55.041 Running I/O for 2 seconds...
00:34:55.302 [2024-06-09 09:13:17.621970] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190fc998
00:34:55.302 [2024-06-09 09:13:17.623316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.302 [2024-06-09 09:13:17.623343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:34:55.302 [2024-06-09 09:13:17.633741] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0
00:34:55.302 [2024-06-09 09:13:17.634121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.302 [2024-06-09 09:13:17.634138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001
p:0 m:0 dnr:0 00:34:55.302 [2024-06-09 09:13:17.645879] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.302 [2024-06-09 09:13:17.646305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.303 [2024-06-09 09:13:17.646322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.303 [2024-06-09 09:13:17.657946] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.303 [2024-06-09 09:13:17.658369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.303 [2024-06-09 09:13:17.658385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.303 [2024-06-09 09:13:17.670039] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.303 [2024-06-09 09:13:17.670535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.303 [2024-06-09 09:13:17.670551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.303 [2024-06-09 09:13:17.682135] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.303 [2024-06-09 09:13:17.682675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.303 [2024-06-09 09:13:17.682691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.303 [2024-06-09 09:13:17.694191] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.303 [2024-06-09 09:13:17.694666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.303 [2024-06-09 09:13:17.694683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.303 [2024-06-09 09:13:17.706267] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.303 [2024-06-09 09:13:17.706701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.303 [2024-06-09 09:13:17.706716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.303 [2024-06-09 09:13:17.718297] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.303 [2024-06-09 09:13:17.718799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.303 [2024-06-09 09:13:17.718815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.303 [2024-06-09 09:13:17.730351] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.303 [2024-06-09 09:13:17.730780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.303 [2024-06-09 09:13:17.730796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.303 [2024-06-09 09:13:17.742430] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.303 [2024-06-09 09:13:17.742921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.303 [2024-06-09 09:13:17.742937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.303 [2024-06-09 09:13:17.754417] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.303 [2024-06-09 09:13:17.754806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.303 [2024-06-09 09:13:17.754821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.303 [2024-06-09 09:13:17.766471] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.303 [2024-06-09 09:13:17.766911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.303 [2024-06-09 09:13:17.766926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.303 [2024-06-09 09:13:17.778524] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.303 [2024-06-09 09:13:17.779043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.303 
[2024-06-09 09:13:17.779058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.303 [2024-06-09 09:13:17.790622] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.303 [2024-06-09 09:13:17.791027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.303 [2024-06-09 09:13:17.791042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.303 [2024-06-09 09:13:17.802656] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.303 [2024-06-09 09:13:17.802946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.303 [2024-06-09 09:13:17.802962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.303 [2024-06-09 09:13:17.814679] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.303 [2024-06-09 09:13:17.815102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.303 [2024-06-09 09:13:17.815118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.303 [2024-06-09 09:13:17.826741] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.303 [2024-06-09 09:13:17.827217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12072 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.303 [2024-06-09 09:13:17.827232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.303 [2024-06-09 09:13:17.838802] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.303 [2024-06-09 09:13:17.839213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.303 [2024-06-09 09:13:17.839229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.303 [2024-06-09 09:13:17.850805] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.303 [2024-06-09 09:13:17.851314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.303 [2024-06-09 09:13:17.851335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:17.862876] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 [2024-06-09 09:13:17.863178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.565 [2024-06-09 09:13:17.863194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:17.874956] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 [2024-06-09 09:13:17.875383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:19 nsid:1 lba:8895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.565 [2024-06-09 09:13:17.875399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:17.886972] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 [2024-06-09 09:13:17.887443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.565 [2024-06-09 09:13:17.887459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:17.899038] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 [2024-06-09 09:13:17.899360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.565 [2024-06-09 09:13:17.899376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:17.911113] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 [2024-06-09 09:13:17.911616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.565 [2024-06-09 09:13:17.911632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:17.923111] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 [2024-06-09 09:13:17.923508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.565 [2024-06-09 09:13:17.923524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:17.935146] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 [2024-06-09 09:13:17.935575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.565 [2024-06-09 09:13:17.935590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:17.947142] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 [2024-06-09 09:13:17.947641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.565 [2024-06-09 09:13:17.947657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:17.959211] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 [2024-06-09 09:13:17.959693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.565 [2024-06-09 09:13:17.959709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:17.971235] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 
[2024-06-09 09:13:17.971656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.565 [2024-06-09 09:13:17.971671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:17.983262] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 [2024-06-09 09:13:17.983772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.565 [2024-06-09 09:13:17.983788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:17.995308] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 [2024-06-09 09:13:17.995629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.565 [2024-06-09 09:13:17.995645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:18.007394] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 [2024-06-09 09:13:18.007849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.565 [2024-06-09 09:13:18.007865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:18.019427] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 [2024-06-09 09:13:18.019748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.565 [2024-06-09 09:13:18.019764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:18.031450] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 [2024-06-09 09:13:18.031809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.565 [2024-06-09 09:13:18.031824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:18.043474] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 [2024-06-09 09:13:18.043902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.565 [2024-06-09 09:13:18.043918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:18.055522] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 [2024-06-09 09:13:18.056039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.565 [2024-06-09 09:13:18.056054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:18.067538] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 [2024-06-09 09:13:18.068017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.565 [2024-06-09 09:13:18.068032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.565 [2024-06-09 09:13:18.079590] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.565 [2024-06-09 09:13:18.080053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.566 [2024-06-09 09:13:18.080068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.566 [2024-06-09 09:13:18.091601] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.566 [2024-06-09 09:13:18.092049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.566 [2024-06-09 09:13:18.092065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.566 [2024-06-09 09:13:18.103639] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.566 [2024-06-09 09:13:18.104042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.566 [2024-06-09 09:13:18.104057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:34:55.566 [2024-06-09 09:13:18.115679] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.566 [2024-06-09 09:13:18.116121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.566 [2024-06-09 09:13:18.116136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.827 [2024-06-09 09:13:18.127704] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.827 [2024-06-09 09:13:18.128107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.827 [2024-06-09 09:13:18.128122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.827 [2024-06-09 09:13:18.139761] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.827 [2024-06-09 09:13:18.140235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.827 [2024-06-09 09:13:18.140250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.827 [2024-06-09 09:13:18.152006] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.827 [2024-06-09 09:13:18.152320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.827 [2024-06-09 09:13:18.152336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.827 [2024-06-09 09:13:18.164004] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.827 [2024-06-09 09:13:18.164416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.827 [2024-06-09 09:13:18.164434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.827 [2024-06-09 09:13:18.176086] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.827 [2024-06-09 09:13:18.176398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.827 [2024-06-09 09:13:18.176416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.827 [2024-06-09 09:13:18.188139] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.828 [2024-06-09 09:13:18.188570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.828 [2024-06-09 09:13:18.188585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.828 [2024-06-09 09:13:18.200179] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.828 [2024-06-09 09:13:18.200605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.828 [2024-06-09 09:13:18.200621] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.828 [2024-06-09 09:13:18.212305] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.828 [2024-06-09 09:13:18.212658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.828 [2024-06-09 09:13:18.212673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.828 [2024-06-09 09:13:18.224329] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.828 [2024-06-09 09:13:18.224812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.828 [2024-06-09 09:13:18.224828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.828 [2024-06-09 09:13:18.236416] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.828 [2024-06-09 09:13:18.236943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.828 [2024-06-09 09:13:18.236958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.828 [2024-06-09 09:13:18.248468] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.828 [2024-06-09 09:13:18.248898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.828 [2024-06-09 09:13:18.248914] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.828 [2024-06-09 09:13:18.260499] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.828 [2024-06-09 09:13:18.260819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.828 [2024-06-09 09:13:18.260834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.828 [2024-06-09 09:13:18.272517] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.828 [2024-06-09 09:13:18.273016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.828 [2024-06-09 09:13:18.273032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.828 [2024-06-09 09:13:18.284598] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.828 [2024-06-09 09:13:18.285056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.828 [2024-06-09 09:13:18.285071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.828 [2024-06-09 09:13:18.296603] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:55.828 [2024-06-09 09:13:18.297123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:55.828 [2024-06-09 09:13:18.297139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:55.828 [2024-06-09 09:13:18.308695] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:55.828 [2024-06-09 09:13:18.309001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:55.828 [2024-06-09 09:13:18.309016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:55.828 [2024-06-09 09:13:18.320774] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:55.828 [2024-06-09 09:13:18.321097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:55.828 [2024-06-09 09:13:18.321112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:55.828 [2024-06-09 09:13:18.332833] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:55.828 [2024-06-09 09:13:18.333169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:55.828 [2024-06-09 09:13:18.333184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:55.828 [2024-06-09 09:13:18.344893] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:55.828 [2024-06-09 09:13:18.345393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:55.828 [2024-06-09 09:13:18.345412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:55.828 [2024-06-09 09:13:18.356933] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:55.828 [2024-06-09 09:13:18.357333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:55.828 [2024-06-09 09:13:18.357348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:55.828 [2024-06-09 09:13:18.368979] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:55.828 [2024-06-09 09:13:18.369418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:55.828 [2024-06-09 09:13:18.369433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:55.828 [2024-06-09 09:13:18.381031] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:55.828 [2024-06-09 09:13:18.381441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:55.828 [2024-06-09 09:13:18.381456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.393104] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.090 [2024-06-09 09:13:18.393613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.090 [2024-06-09 09:13:18.393628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.405158] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.090 [2024-06-09 09:13:18.405456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.090 [2024-06-09 09:13:18.405471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.417185] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.090 [2024-06-09 09:13:18.417608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.090 [2024-06-09 09:13:18.417623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.429166] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.090 [2024-06-09 09:13:18.429621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.090 [2024-06-09 09:13:18.429637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.441310] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.090 [2024-06-09 09:13:18.441623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.090 [2024-06-09 09:13:18.441638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.453356] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.090 [2024-06-09 09:13:18.453796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.090 [2024-06-09 09:13:18.453812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.465395] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.090 [2024-06-09 09:13:18.465899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.090 [2024-06-09 09:13:18.465914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.477502] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.090 [2024-06-09 09:13:18.478007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.090 [2024-06-09 09:13:18.478025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.489495] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.090 [2024-06-09 09:13:18.489819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.090 [2024-06-09 09:13:18.489835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.501558] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.090 [2024-06-09 09:13:18.502006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.090 [2024-06-09 09:13:18.502021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.513563] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.090 [2024-06-09 09:13:18.514043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.090 [2024-06-09 09:13:18.514059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.525631] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.090 [2024-06-09 09:13:18.525950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.090 [2024-06-09 09:13:18.525965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.537628] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.090 [2024-06-09 09:13:18.538108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.090 [2024-06-09 09:13:18.538123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.549709] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.090 [2024-06-09 09:13:18.550150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.090 [2024-06-09 09:13:18.550166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.561696] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.090 [2024-06-09 09:13:18.562006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.090 [2024-06-09 09:13:18.562021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.573761] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.090 [2024-06-09 09:13:18.574108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.090 [2024-06-09 09:13:18.574123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.585809] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.090 [2024-06-09 09:13:18.586296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.090 [2024-06-09 09:13:18.586311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.597812] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.090 [2024-06-09 09:13:18.598325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:47 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.090 [2024-06-09 09:13:18.598340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.090 [2024-06-09 09:13:18.609850] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.091 [2024-06-09 09:13:18.610198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.091 [2024-06-09 09:13:18.610212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.091 [2024-06-09 09:13:18.621865] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.091 [2024-06-09 09:13:18.622351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.091 [2024-06-09 09:13:18.622366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.091 [2024-06-09 09:13:18.633939] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.091 [2024-06-09 09:13:18.634442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.091 [2024-06-09 09:13:18.634456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.091 [2024-06-09 09:13:18.645994] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.091 [2024-06-09 09:13:18.646335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.091 [2024-06-09 09:13:18.646350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.352 [2024-06-09 09:13:18.658083] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.352 [2024-06-09 09:13:18.658523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.352 [2024-06-09 09:13:18.658538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.352 [2024-06-09 09:13:18.670097] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.352 [2024-06-09 09:13:18.670586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.352 [2024-06-09 09:13:18.670600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.352 [2024-06-09 09:13:18.682164] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.352 [2024-06-09 09:13:18.682577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.352 [2024-06-09 09:13:18.682593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.352 [2024-06-09 09:13:18.694194] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.352 [2024-06-09 09:13:18.694626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.352 [2024-06-09 09:13:18.694641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.352 [2024-06-09 09:13:18.706239] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.352 [2024-06-09 09:13:18.706693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.352 [2024-06-09 09:13:18.706708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.352 [2024-06-09 09:13:18.718306] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.352 [2024-06-09 09:13:18.718743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.352 [2024-06-09 09:13:18.718758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.352 [2024-06-09 09:13:18.730322] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.353 [2024-06-09 09:13:18.730763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.353 [2024-06-09 09:13:18.730777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.353 [2024-06-09 09:13:18.742418] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.353 [2024-06-09 09:13:18.742756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.353 [2024-06-09 09:13:18.742771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.353 [2024-06-09 09:13:18.754477] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.353 [2024-06-09 09:13:18.754931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.353 [2024-06-09 09:13:18.754946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.353 [2024-06-09 09:13:18.766468] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.353 [2024-06-09 09:13:18.766922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.353 [2024-06-09 09:13:18.766936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.353 [2024-06-09 09:13:18.778581] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.353 [2024-06-09 09:13:18.779041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.353 [2024-06-09 09:13:18.779055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.353 [2024-06-09 09:13:18.790747] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.353 [2024-06-09 09:13:18.791106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.353 [2024-06-09 09:13:18.791123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.353 [2024-06-09 09:13:18.802794] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.353 [2024-06-09 09:13:18.803231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.353 [2024-06-09 09:13:18.803246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.353 [2024-06-09 09:13:18.814896] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.353 [2024-06-09 09:13:18.815400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.353 [2024-06-09 09:13:18.815419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.353 [2024-06-09 09:13:18.826923] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.353 [2024-06-09 09:13:18.827358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.353 [2024-06-09 09:13:18.827372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.353 [2024-06-09 09:13:18.838979] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.353 [2024-06-09 09:13:18.839304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.353 [2024-06-09 09:13:18.839319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.353 [2024-06-09 09:13:18.851009] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.353 [2024-06-09 09:13:18.851441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.353 [2024-06-09 09:13:18.851457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.353 [2024-06-09 09:13:18.863031] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.353 [2024-06-09 09:13:18.863373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.353 [2024-06-09 09:13:18.863389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.353 [2024-06-09 09:13:18.875113] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.353 [2024-06-09 09:13:18.875425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.353 [2024-06-09 09:13:18.875440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.353 [2024-06-09 09:13:18.887117] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.353 [2024-06-09 09:13:18.887529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.353 [2024-06-09 09:13:18.887544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.353 [2024-06-09 09:13:18.899297] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.353 [2024-06-09 09:13:18.899802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.353 [2024-06-09 09:13:18.899818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.614 [2024-06-09 09:13:18.911320] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.614 [2024-06-09 09:13:18.911793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.614 [2024-06-09 09:13:18.911808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.614 [2024-06-09 09:13:18.923389] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.614 [2024-06-09 09:13:18.923715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.614 [2024-06-09 09:13:18.923730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.614 [2024-06-09 09:13:18.935514] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.614 [2024-06-09 09:13:18.935960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:18.935975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:18.947540] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:18.947998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:18.948013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:18.959526] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:18.959845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:18.959861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:18.971543] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:18.971951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:18.971966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:18.983594] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:18.983924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:18.983939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:18.995621] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:18.996045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:18.996061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:19.007692] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:19.008186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:19.008202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:19.019727] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:19.020064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:19.020079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:19.031720] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:19.032023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:19.032038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:19.043734] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:19.044099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:19.044115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:19.055753] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:19.056226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:19.056241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:19.067826] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:19.068265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:19.068280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:19.079831] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:19.080299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:19.080315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:19.091854] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:19.092350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:19.092366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:19.103894] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:19.104327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:19.104344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:19.115959] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:19.116406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:19.116422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:19.127970] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:19.128294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:19.128309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:19.140049] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:19.140469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:19.140486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:19.152275] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:19.152689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:19.152704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.615 [2024-06-09 09:13:19.164339] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.615 [2024-06-09 09:13:19.164748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.615 [2024-06-09 09:13:19.164764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.877 [2024-06-09 09:13:19.176347] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.877 [2024-06-09 09:13:19.176669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.877 [2024-06-09 09:13:19.176684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:56.877 [2024-06-09 09:13:19.188391] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.877 [2024-06-09 09:13:19.188815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.877 [2024-06-09 09:13:19.188830] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.200436] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.200866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.200881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.212447] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.212867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.212884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.224509] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.224948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.224963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.236521] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.236989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.237005] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.248572] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.248876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.248892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.260560] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.260944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.260959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.272603] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.273090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.273105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.284639] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.285126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16627 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:56.877 [2024-06-09 09:13:19.285141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.296689] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.297052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.297068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.308741] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.309220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.309235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.320788] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.321239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.321255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.332826] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.333318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 
nsid:1 lba:13008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.333333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.344832] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.345154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.345170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.356852] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.357235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.357250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.368881] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.369176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.369191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.380924] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.381450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.381466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.393013] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.393458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.393474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.405009] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.405456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.405472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.417015] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:56.877 [2024-06-09 09:13:19.417452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.417467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.877 [2024-06-09 09:13:19.429058] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 
00:34:56.877 [2024-06-09 09:13:19.429519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.877 [2024-06-09 09:13:19.429534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.138 [2024-06-09 09:13:19.441111] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:57.138 [2024-06-09 09:13:19.441420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.138 [2024-06-09 09:13:19.441436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.138 [2024-06-09 09:13:19.453165] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:57.138 [2024-06-09 09:13:19.453468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.138 [2024-06-09 09:13:19.453482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.138 [2024-06-09 09:13:19.465138] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:57.139 [2024-06-09 09:13:19.465644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.139 [2024-06-09 09:13:19.465659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.139 [2024-06-09 09:13:19.477169] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:57.139 [2024-06-09 09:13:19.477556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.139 [2024-06-09 09:13:19.477571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.139 [2024-06-09 09:13:19.489186] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:57.139 [2024-06-09 09:13:19.489585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.139 [2024-06-09 09:13:19.489601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.139 [2024-06-09 09:13:19.501186] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:57.139 [2024-06-09 09:13:19.501494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.139 [2024-06-09 09:13:19.501510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.139 [2024-06-09 09:13:19.513204] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:57.139 [2024-06-09 09:13:19.513516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.139 [2024-06-09 09:13:19.513531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.139 [2024-06-09 09:13:19.525250] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:57.139 [2024-06-09 09:13:19.525556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.139 [2024-06-09 09:13:19.525574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.139 [2024-06-09 09:13:19.537285] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:57.139 [2024-06-09 09:13:19.537698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.139 [2024-06-09 09:13:19.537713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.139 [2024-06-09 09:13:19.549261] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:57.139 [2024-06-09 09:13:19.549564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.139 [2024-06-09 09:13:19.549579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.139 [2024-06-09 09:13:19.561302] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:57.139 [2024-06-09 09:13:19.561794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.139 [2024-06-09 09:13:19.561809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:34:57.139 [2024-06-09 09:13:19.573337] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:57.139 [2024-06-09 09:13:19.573765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.139 [2024-06-09 09:13:19.573780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.139 [2024-06-09 09:13:19.585385] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:57.139 [2024-06-09 09:13:19.585814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.139 [2024-06-09 09:13:19.585829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.139 [2024-06-09 09:13:19.597364] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465370) with pdu=0x2000190f0bc0 00:34:57.139 [2024-06-09 09:13:19.597867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.139 [2024-06-09 09:13:19.597882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.139 00:34:57.139 Latency(us) 00:34:57.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.139 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:57.139 nvme0n1 : 2.01 21076.43 82.33 0.00 0.00 6061.55 3604.48 18786.99 00:34:57.139 =================================================================================================================== 00:34:57.139 Total : 21076.43 82.33 0.00 
0.00 6061.55 3604.48 18786.99 00:34:57.139 0 00:34:57.139 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:57.139 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:57.139 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:57.139 | .driver_specific 00:34:57.139 | .nvme_error 00:34:57.139 | .status_code 00:34:57.139 | .command_transient_transport_error' 00:34:57.139 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:57.400 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 165 > 0 )) 00:34:57.400 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2833571 00:34:57.400 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2833571 ']' 00:34:57.400 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2833571 00:34:57.400 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:34:57.400 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:57.400 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2833571 00:34:57.400 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:57.400 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:57.400 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2833571' 00:34:57.400 killing process with pid 2833571 00:34:57.400 09:13:19 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2833571 00:34:57.400 Received shutdown signal, test time was about 2.000000 seconds 00:34:57.400 00:34:57.400 Latency(us) 00:34:57.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.400 =================================================================================================================== 00:34:57.400 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:57.400 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2833571 00:34:57.661 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:34:57.661 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:57.661 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:57.661 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:57.661 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:57.661 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2834264 00:34:57.661 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2834264 /var/tmp/bperf.sock 00:34:57.661 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2834264 ']' 00:34:57.661 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:34:57.661 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:57.661 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:57.661 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:57.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:57.661 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:57.661 09:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:57.661 [2024-06-09 09:13:20.010735] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:34:57.661 [2024-06-09 09:13:20.010784] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834264 ] 00:34:57.661 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:57.661 Zero copy mechanism will not be used. 00:34:57.661 EAL: No free 2048 kB hugepages reported on node 1 00:34:57.661 [2024-06-09 09:13:20.100001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.661 [2024-06-09 09:13:20.153703] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:58.233 09:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:58.233 09:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:34:58.233 09:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:58.233 09:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:58.496 09:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 
00:34:58.496 09:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:58.496 09:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:58.496 09:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:58.496 09:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:58.496 09:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:58.757 nvme0n1 00:34:59.018 09:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:59.018 09:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:59.018 09:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:59.018 09:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:59.018 09:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:59.018 09:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:59.018 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:59.018 Zero copy mechanism will not be used. 00:34:59.018 Running I/O for 2 seconds... 
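The `data_crc32_calc_done` errors filling this log come from `accel_error_inject_error -o crc32c -t corrupt -i 32`, which corrupts the CRC32C computation behind the NVMe/TCP data digest (`--ddgst`). A sketch of that checksum, using a bit-at-a-time CRC32C (Castagnoli, reflected polynomial `0x82F63B78`) purely for illustration — SPDK's accel layer uses table-driven or hardware-accelerated variants, not this loop:

```python
def crc32c(data: bytes) -> int:
    """Bit-at-a-time CRC32C (Castagnoli), the checksum used for NVMe/TCP data digests."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Reflected form of polynomial 0x1EDC6F41
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789".
assert crc32c(b"123456789") == 0xE3069283

payload = bytes(4096)                      # one 4 KiB data PDU payload, as in the len:0x1000 writes above
good_digest = crc32c(payload)
corrupted = bytearray(payload)
corrupted[0] ^= 0x01                       # any corruption changes the digest
assert crc32c(bytes(corrupted)) != good_digest
```

When the receiver's recomputed digest disagrees with the digest carried in the PDU, the target logs the `Data digest error on tqpair=...` message and the command completes with the TRANSIENT TRANSPORT ERROR (00/22) status seen throughout this run.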
00:34:59.018 [2024-06-09 09:13:21.450629] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:34:59.018 [2024-06-09 09:13:21.451042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.018 [2024-06-09 09:13:21.451070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.018 [2024-06-09 09:13:21.467785] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:34:59.018 [2024-06-09 09:13:21.468140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.018 [2024-06-09 09:13:21.468159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.018 [2024-06-09 09:13:21.481324] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:34:59.018 [2024-06-09 09:13:21.481693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.018 [2024-06-09 09:13:21.481711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.018 [2024-06-09 09:13:21.494357] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:34:59.018 [2024-06-09 09:13:21.494611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.018 [2024-06-09 09:13:21.494627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.018 [2024-06-09 09:13:21.507701] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.018 [2024-06-09 09:13:21.507960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.018 [2024-06-09 09:13:21.507976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.018 [2024-06-09 09:13:21.521290] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.018 [2024-06-09 09:13:21.521528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.018 [2024-06-09 09:13:21.521543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.018 [2024-06-09 09:13:21.535612] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.018 [2024-06-09 09:13:21.535854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.018 [2024-06-09 09:13:21.535868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.018 [2024-06-09 09:13:21.549972] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.018 [2024-06-09 09:13:21.550524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.018 [2024-06-09 09:13:21.550542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.018 [2024-06-09 09:13:21.564924] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.018 [2024-06-09 09:13:21.565187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.018 [2024-06-09 09:13:21.565204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.279 [2024-06-09 09:13:21.579939] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.279 [2024-06-09 09:13:21.580441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.279 [2024-06-09 09:13:21.580458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.279 [2024-06-09 09:13:21.593937] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.279 [2024-06-09 09:13:21.594295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.279 [2024-06-09 09:13:21.594312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.279 [2024-06-09 09:13:21.607765] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.279 [2024-06-09 09:13:21.608116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.280 [2024-06-09 09:13:21.608135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.280 [2024-06-09 09:13:21.620792] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.280 [2024-06-09 09:13:21.621235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.280 [2024-06-09 09:13:21.621252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.280 [2024-06-09 09:13:21.634112] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.280 [2024-06-09 09:13:21.634436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.280 [2024-06-09 09:13:21.634453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.280 [2024-06-09 09:13:21.648121] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.280 [2024-06-09 09:13:21.648502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.280 [2024-06-09 09:13:21.648519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.280 [2024-06-09 09:13:21.662742] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.280 [2024-06-09 09:13:21.663060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.280 [2024-06-09 09:13:21.663077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.280 [2024-06-09 09:13:21.677173] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.280 [2024-06-09 09:13:21.677639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.280 [2024-06-09 09:13:21.677657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.280 [2024-06-09 09:13:21.692787] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.280 [2024-06-09 09:13:21.693265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.280 [2024-06-09 09:13:21.693283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.280 [2024-06-09 09:13:21.707122] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.280 [2024-06-09 09:13:21.707443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.280 [2024-06-09 09:13:21.707461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.280 [2024-06-09 09:13:21.720746] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.280 [2024-06-09 09:13:21.721138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.280 [2024-06-09 09:13:21.721155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.280 [2024-06-09 09:13:21.735091] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.280 [2024-06-09 09:13:21.735474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.280 [2024-06-09 09:13:21.735491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.280 [2024-06-09 09:13:21.749720] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.280 [2024-06-09 09:13:21.750101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.280 [2024-06-09 09:13:21.750118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.280 [2024-06-09 09:13:21.764951] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.280 [2024-06-09 09:13:21.765353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.280 [2024-06-09 09:13:21.765369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.280 [2024-06-09 09:13:21.780785] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.280 [2024-06-09 09:13:21.781037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.280 [2024-06-09 09:13:21.781054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.280 [2024-06-09 09:13:21.795224] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.280 [2024-06-09 09:13:21.795480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.280 [2024-06-09 09:13:21.795496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.280 [2024-06-09 09:13:21.809792] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.280 [2024-06-09 09:13:21.810155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.280 [2024-06-09 09:13:21.810172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.280 [2024-06-09 09:13:21.824386] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.280 [2024-06-09 09:13:21.824730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.280 [2024-06-09 09:13:21.824748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:21.838719] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:21.839100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:21.839117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:21.853732] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:21.853983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:21.854000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:21.867228] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:21.867486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:21.867502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:21.882147] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:21.882573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:21.882590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:21.896222] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:21.896604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:21.896621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:21.910944] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:21.911205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:21.911221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:21.924761] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:21.925016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:21.925034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:21.938225] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:21.938491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:21.938508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:21.952626] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:21.952967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:21.952984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:21.968121] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:21.968595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:21.968612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:21.983368] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:21.983704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:21.983722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:21.997953] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:21.998252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:21.998269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:22.013207] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:22.013652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:22.013668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:22.026724] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:22.026980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:22.026997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:22.040580] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:22.040921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:22.040938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:22.055285] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:22.055541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:22.055557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:22.069158] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:22.069521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:22.069538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:22.083952] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:22.084342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:22.084357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.542 [2024-06-09 09:13:22.099116] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.542 [2024-06-09 09:13:22.099501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.542 [2024-06-09 09:13:22.099518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.804 [2024-06-09 09:13:22.114446] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.804 [2024-06-09 09:13:22.114799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.804 [2024-06-09 09:13:22.114816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.804 [2024-06-09 09:13:22.127047] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.804 [2024-06-09 09:13:22.127435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.804 [2024-06-09 09:13:22.127452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.804 [2024-06-09 09:13:22.142057] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.804 [2024-06-09 09:13:22.142442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.804 [2024-06-09 09:13:22.142459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.804 [2024-06-09 09:13:22.156084] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.804 [2024-06-09 09:13:22.156395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.805 [2024-06-09 09:13:22.156417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.805 [2024-06-09 09:13:22.171023] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.805 [2024-06-09 09:13:22.171472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.805 [2024-06-09 09:13:22.171490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.805 [2024-06-09 09:13:22.186031] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.805 [2024-06-09 09:13:22.186415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.805 [2024-06-09 09:13:22.186432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.805 [2024-06-09 09:13:22.198931] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.805 [2024-06-09 09:13:22.199303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.805 [2024-06-09 09:13:22.199319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.805 [2024-06-09 09:13:22.213340] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.805 [2024-06-09 09:13:22.213642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.805 [2024-06-09 09:13:22.213659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.805 [2024-06-09 09:13:22.228460] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.805 [2024-06-09 09:13:22.228718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.805 [2024-06-09 09:13:22.228734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.805 [2024-06-09 09:13:22.242833] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.805 [2024-06-09 09:13:22.243240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.805 [2024-06-09 09:13:22.243256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.805 [2024-06-09 09:13:22.257002] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.805 [2024-06-09 09:13:22.257255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.805 [2024-06-09 09:13:22.257271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.805 [2024-06-09 09:13:22.270530] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.805 [2024-06-09 09:13:22.270760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.805 [2024-06-09 09:13:22.270775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.805 [2024-06-09 09:13:22.285051] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.805 [2024-06-09 09:13:22.285358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.805 [2024-06-09 09:13:22.285375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.805 [2024-06-09 09:13:22.300367] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.805 [2024-06-09 09:13:22.300647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.805 [2024-06-09 09:13:22.300672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.805 [2024-06-09 09:13:22.314629] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.805 [2024-06-09 09:13:22.314876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.805 [2024-06-09 09:13:22.314891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.805 [2024-06-09 09:13:22.328824] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.805 [2024-06-09 09:13:22.329211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.805 [2024-06-09 09:13:22.329227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.805 [2024-06-09 09:13:22.343669] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.805 [2024-06-09 09:13:22.343956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.805 [2024-06-09 09:13:22.343973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.805 [2024-06-09 09:13:22.359032] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:34:59.805 [2024-06-09 09:13:22.359379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.805 [2024-06-09 09:13:22.359397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:00.066 [2024-06-09 09:13:22.374005] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:35:00.066 [2024-06-09 09:13:22.374317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.066 [2024-06-09 09:13:22.374333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:00.066 [2024-06-09 09:13:22.389022] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:35:00.066 [2024-06-09 09:13:22.389419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.066 [2024-06-09 09:13:22.389435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:00.066 [2024-06-09 09:13:22.402841] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:35:00.066 [2024-06-09 09:13:22.403030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.066 [2024-06-09 09:13:22.403045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:00.066 [2024-06-09 09:13:22.418437] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:35:00.066 [2024-06-09 09:13:22.418697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.066 [2024-06-09 09:13:22.418712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:00.066 [2024-06-09 09:13:22.431103] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:35:00.066 [2024-06-09 09:13:22.431355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.066 [2024-06-09 09:13:22.431371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:00.066 [2024-06-09 09:13:22.444869] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:35:00.066 [2024-06-09 09:13:22.445122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.066 [2024-06-09 09:13:22.445138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:00.066 [2024-06-09 09:13:22.458160] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:35:00.066 [2024-06-09 09:13:22.458394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.066 [2024-06-09 09:13:22.458411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:00.066 [2024-06-09 09:13:22.472906] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:35:00.066 [2024-06-09 09:13:22.473301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.066 [2024-06-09 09:13:22.473317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:00.066 [2024-06-09 09:13:22.486324] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:35:00.066 [2024-06-09 09:13:22.486577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.066 [2024-06-09 09:13:22.486593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:00.066 [2024-06-09 09:13:22.499717] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:35:00.066 [2024-06-09 09:13:22.499995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.066 [2024-06-09 09:13:22.500012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:00.066 [2024-06-09 09:13:22.513330] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:35:00.066 [2024-06-09 09:13:22.513594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.066 [2024-06-09 09:13:22.513611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:00.066 [2024-06-09 09:13:22.526212] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:35:00.067 [2024-06-09 09:13:22.526467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.067 [2024-06-09 09:13:22.526490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:00.067 [2024-06-09 09:13:22.540128] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:35:00.067 [2024-06-09 09:13:22.540495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.067 [2024-06-09 09:13:22.540512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:00.067 [2024-06-09 09:13:22.554433] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:35:00.067 [2024-06-09 09:13:22.554880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.067 [2024-06-09 09:13:22.554896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:00.067 [2024-06-09 09:13:22.569903] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:35:00.067 [2024-06-09 09:13:22.570250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.067 [2024-06-09 09:13:22.570267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:00.067 [2024-06-09 09:13:22.583652] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:35:00.067 [2024-06-09 09:13:22.583796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.067 [2024-06-09 09:13:22.583810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:00.067 [2024-06-09 09:13:22.596625] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:35:00.067 [2024-06-09 09:13:22.596961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.067 [2024-06-09 09:13:22.596980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:00.067 [2024-06-09 09:13:22.610132] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90
00:35:00.067 [2024-06-09 09:13:22.610393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:35:00.067 [2024-06-09 09:13:22.610413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.329 [2024-06-09 09:13:22.624437] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.329 [2024-06-09 09:13:22.624790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-06-09 09:13:22.624807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.329 [2024-06-09 09:13:22.639664] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.329 [2024-06-09 09:13:22.639994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-06-09 09:13:22.640010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.329 [2024-06-09 09:13:22.653209] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.329 [2024-06-09 09:13:22.653584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-06-09 09:13:22.653601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.329 [2024-06-09 09:13:22.667170] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.329 [2024-06-09 09:13:22.667461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-06-09 09:13:22.667477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.329 [2024-06-09 09:13:22.681587] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.329 [2024-06-09 09:13:22.682090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-06-09 09:13:22.682106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.329 [2024-06-09 09:13:22.696104] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.329 [2024-06-09 09:13:22.696374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-06-09 09:13:22.696390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.329 [2024-06-09 09:13:22.709511] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.329 [2024-06-09 09:13:22.709857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-06-09 09:13:22.709873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.329 [2024-06-09 09:13:22.724270] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.329 [2024-06-09 09:13:22.724533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-06-09 09:13:22.724549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.329 [2024-06-09 09:13:22.738715] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.329 [2024-06-09 09:13:22.738862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-06-09 09:13:22.738876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.329 [2024-06-09 09:13:22.753043] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.329 [2024-06-09 09:13:22.753523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-06-09 09:13:22.753541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.329 [2024-06-09 09:13:22.768724] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.329 [2024-06-09 09:13:22.769121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-06-09 09:13:22.769138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.329 [2024-06-09 09:13:22.782998] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 
00:35:00.329 [2024-06-09 09:13:22.783333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-06-09 09:13:22.783349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.329 [2024-06-09 09:13:22.798092] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.329 [2024-06-09 09:13:22.798343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-06-09 09:13:22.798359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.329 [2024-06-09 09:13:22.812616] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.329 [2024-06-09 09:13:22.812891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.330 [2024-06-09 09:13:22.812908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.330 [2024-06-09 09:13:22.827472] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.330 [2024-06-09 09:13:22.827703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.330 [2024-06-09 09:13:22.827719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.330 [2024-06-09 09:13:22.842573] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.330 [2024-06-09 09:13:22.842876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.330 [2024-06-09 09:13:22.842893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.330 [2024-06-09 09:13:22.857057] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.330 [2024-06-09 09:13:22.857494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.330 [2024-06-09 09:13:22.857511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.330 [2024-06-09 09:13:22.871714] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.330 [2024-06-09 09:13:22.871981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.330 [2024-06-09 09:13:22.871998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.330 [2024-06-09 09:13:22.886680] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.593 [2024-06-09 09:13:22.887066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.593 [2024-06-09 09:13:22.887083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.593 [2024-06-09 
09:13:22.901658] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.593 [2024-06-09 09:13:22.902043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.593 [2024-06-09 09:13:22.902060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.593 [2024-06-09 09:13:22.915762] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.593 [2024-06-09 09:13:22.916063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.593 [2024-06-09 09:13:22.916079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.593 [2024-06-09 09:13:22.930120] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.593 [2024-06-09 09:13:22.930427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.593 [2024-06-09 09:13:22.930444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.593 [2024-06-09 09:13:22.944965] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.593 [2024-06-09 09:13:22.945321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.593 [2024-06-09 09:13:22.945338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.593 [2024-06-09 09:13:22.959594] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.593 [2024-06-09 09:13:22.959960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.593 [2024-06-09 09:13:22.959976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.593 [2024-06-09 09:13:22.972751] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.593 [2024-06-09 09:13:22.973098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.593 [2024-06-09 09:13:22.973116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.593 [2024-06-09 09:13:22.987368] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.593 [2024-06-09 09:13:22.987716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.593 [2024-06-09 09:13:22.987733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.593 [2024-06-09 09:13:23.002613] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.593 [2024-06-09 09:13:23.002890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.593 [2024-06-09 09:13:23.002907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.593 [2024-06-09 09:13:23.016880] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.593 [2024-06-09 09:13:23.017177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.593 [2024-06-09 09:13:23.017194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.593 [2024-06-09 09:13:23.030170] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.593 [2024-06-09 09:13:23.030489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.593 [2024-06-09 09:13:23.030507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.593 [2024-06-09 09:13:23.044173] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.593 [2024-06-09 09:13:23.044460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.593 [2024-06-09 09:13:23.044475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.593 [2024-06-09 09:13:23.059220] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.593 [2024-06-09 09:13:23.059598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.593 [2024-06-09 09:13:23.059614] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.593 [2024-06-09 09:13:23.074379] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.593 [2024-06-09 09:13:23.074683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.593 [2024-06-09 09:13:23.074700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.593 [2024-06-09 09:13:23.089395] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.594 [2024-06-09 09:13:23.089693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.594 [2024-06-09 09:13:23.089708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.594 [2024-06-09 09:13:23.104288] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.594 [2024-06-09 09:13:23.104555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.594 [2024-06-09 09:13:23.104571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.594 [2024-06-09 09:13:23.117040] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.594 [2024-06-09 09:13:23.117317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:00.594 [2024-06-09 09:13:23.117333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.594 [2024-06-09 09:13:23.130576] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.594 [2024-06-09 09:13:23.130835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.594 [2024-06-09 09:13:23.130852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.594 [2024-06-09 09:13:23.144631] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.594 [2024-06-09 09:13:23.144926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.594 [2024-06-09 09:13:23.144943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.856 [2024-06-09 09:13:23.158110] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.856 [2024-06-09 09:13:23.158360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.856 [2024-06-09 09:13:23.158376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.856 [2024-06-09 09:13:23.172014] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.856 [2024-06-09 09:13:23.172200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.856 [2024-06-09 09:13:23.172216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.856 [2024-06-09 09:13:23.185465] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.856 [2024-06-09 09:13:23.185803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.856 [2024-06-09 09:13:23.185819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.856 [2024-06-09 09:13:23.200348] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.856 [2024-06-09 09:13:23.200656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.856 [2024-06-09 09:13:23.200673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.856 [2024-06-09 09:13:23.214587] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.856 [2024-06-09 09:13:23.214877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.856 [2024-06-09 09:13:23.214892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.856 [2024-06-09 09:13:23.228362] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.856 [2024-06-09 09:13:23.228722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.856 [2024-06-09 09:13:23.228739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.856 [2024-06-09 09:13:23.242365] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.856 [2024-06-09 09:13:23.242745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.856 [2024-06-09 09:13:23.242762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.856 [2024-06-09 09:13:23.256529] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.856 [2024-06-09 09:13:23.256924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.856 [2024-06-09 09:13:23.256941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.856 [2024-06-09 09:13:23.271901] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.856 [2024-06-09 09:13:23.272273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.856 [2024-06-09 09:13:23.272290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.856 [2024-06-09 09:13:23.287395] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 
00:35:00.856 [2024-06-09 09:13:23.287654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.856 [2024-06-09 09:13:23.287670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.856 [2024-06-09 09:13:23.301740] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.856 [2024-06-09 09:13:23.302032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.856 [2024-06-09 09:13:23.302049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.856 [2024-06-09 09:13:23.315659] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.856 [2024-06-09 09:13:23.315948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.856 [2024-06-09 09:13:23.315965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.856 [2024-06-09 09:13:23.329817] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.856 [2024-06-09 09:13:23.330068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.856 [2024-06-09 09:13:23.330093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.856 [2024-06-09 09:13:23.344752] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.856 [2024-06-09 09:13:23.345127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.856 [2024-06-09 09:13:23.345146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.856 [2024-06-09 09:13:23.359096] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.856 [2024-06-09 09:13:23.359509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.856 [2024-06-09 09:13:23.359526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.856 [2024-06-09 09:13:23.374306] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.856 [2024-06-09 09:13:23.374715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.856 [2024-06-09 09:13:23.374731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.856 [2024-06-09 09:13:23.389833] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.856 [2024-06-09 09:13:23.390261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.856 [2024-06-09 09:13:23.390278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.856 [2024-06-09 
09:13:23.403378] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:00.856 [2024-06-09 09:13:23.403644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.856 [2024-06-09 09:13:23.403661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.117 [2024-06-09 09:13:23.417339] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1465700) with pdu=0x2000190fef90 00:35:01.117 [2024-06-09 09:13:23.417567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.117 [2024-06-09 09:13:23.417583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.117 00:35:01.117 Latency(us) 00:35:01.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.117 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:01.117 nvme0n1 : 2.01 2137.31 267.16 0.00 0.00 7470.73 5570.56 28617.39 00:35:01.117 =================================================================================================================== 00:35:01.117 Total : 2137.31 267.16 0.00 0.00 7470.73 5570.56 28617.39 00:35:01.117 0 00:35:01.117 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:01.117 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:01.117 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:01.117 | .driver_specific 00:35:01.117 | .nvme_error 00:35:01.117 | .status_code 00:35:01.117 | .command_transient_transport_error' 
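The `get_transient_errcount` step above pipes `bdev_get_iostat -b nvme0n1` through a jq filter selecting `.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error`, and the result feeds the `(( 138 > 0 ))` check seen below. A minimal Python sketch of that extraction; the sample payload is hypothetical, shaped only by the fields the jq filter names (the real `bdev_get_iostat` output carries many more fields):

```python
import json

# Hypothetical sample mirroring only the fields the jq filter in the log
# selects; 138 is the count the log's (( 138 > 0 )) check observed.
sample = json.dumps({
    "bdevs": [{
        "name": "nvme0n1",
        "driver_specific": {
            "nvme_error": {
                "status_code": {"command_transient_transport_error": 138}
            }
        }
    }]
})

def transient_errcount(iostat_json: str) -> int:
    """Python equivalent of:
    jq -r '.bdevs[0] | .driver_specific | .nvme_error
           | .status_code | .command_transient_transport_error'"""
    stat = json.loads(iostat_json)
    status = stat["bdevs"][0]["driver_specific"]["nvme_error"]["status_code"]
    return status["command_transient_transport_error"]

print(transient_errcount(sample))  # 138
```

The test passes as long as this count is positive, i.e. at least one WRITE was completed with TRANSIENT TRANSPORT ERROR status due to an injected digest failure.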
00:35:01.117 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:01.117 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 )) 00:35:01.117 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2834264 00:35:01.117 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2834264 ']' 00:35:01.117 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2834264 00:35:01.118 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:35:01.118 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:01.118 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2834264 00:35:01.118 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:35:01.118 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:35:01.118 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2834264' 00:35:01.118 killing process with pid 2834264 00:35:01.118 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2834264 00:35:01.118 Received shutdown signal, test time was about 2.000000 seconds 00:35:01.118 00:35:01.118 Latency(us) 00:35:01.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.118 =================================================================================================================== 00:35:01.118 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:01.118 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@973 -- # wait 2834264 00:35:01.379 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2831964 00:35:01.379 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2831964 ']' 00:35:01.379 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2831964 00:35:01.379 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:35:01.379 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:01.379 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2831964 00:35:01.379 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:01.379 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:01.379 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2831964' 00:35:01.379 killing process with pid 2831964 00:35:01.379 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2831964 00:35:01.379 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2831964 00:35:01.640 00:35:01.640 real 0m16.251s 00:35:01.640 user 0m32.095s 00:35:01.640 sys 0m3.078s 00:35:01.640 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:01.640 09:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:01.640 ************************************ 00:35:01.640 END TEST nvmf_digest_error 00:35:01.640 ************************************ 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- 
host/digest.sh@150 -- # nvmftestfini 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:01.640 rmmod nvme_tcp 00:35:01.640 rmmod nvme_fabrics 00:35:01.640 rmmod nvme_keyring 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2831964 ']' 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2831964 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@949 -- # '[' -z 2831964 ']' 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@953 -- # kill -0 2831964 00:35:01.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2831964) - No such process 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@976 -- # echo 'Process with pid 2831964 is not found' 00:35:01.640 Process with pid 2831964 is not found 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:01.640 09:13:24 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:01.640 09:13:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.213 09:13:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:04.213 00:35:04.213 real 0m41.982s 00:35:04.213 user 1m6.127s 00:35:04.213 sys 0m11.520s 00:35:04.213 09:13:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:04.213 09:13:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:04.213 ************************************ 00:35:04.213 END TEST nvmf_digest 00:35:04.213 ************************************ 00:35:04.213 09:13:26 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:35:04.213 09:13:26 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:35:04.213 09:13:26 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:35:04.213 09:13:26 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:04.213 09:13:26 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:35:04.213 09:13:26 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:04.213 09:13:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:04.213 ************************************ 00:35:04.213 START TEST nvmf_bdevperf 00:35:04.213 ************************************ 00:35:04.213 09:13:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:04.213 * Looking for test storage... 
00:35:04.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:04.214 09:13:26 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:04.214 09:13:26 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:35:04.214 09:13:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:10.805 09:13:32 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:10.805 
09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:10.805 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:10.805 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:10.805 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:10.805 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:10.806 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:10.806 09:13:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:10.806 09:13:33 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:10.806 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:10.806 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:10.806 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:10.806 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:10.806 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:10.806 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:10.806 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:10.806 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:10.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:10.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:35:10.806 00:35:10.806 --- 10.0.0.2 ping statistics --- 00:35:10.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:10.806 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:35:10.806 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:10.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:10.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.398 ms 00:35:10.806 00:35:10.806 --- 10.0.0.1 ping statistics --- 00:35:10.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:10.806 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:35:10.806 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:10.806 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:35:10.806 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:10.806 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:10.806 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:10.806 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:10.806 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:10.806 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:10.806 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:11.067 09:13:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:11.067 09:13:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:11.067 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:11.067 09:13:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:11.067 09:13:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:11.067 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2839263 00:35:11.067 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2839263 00:35:11.067 09:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:11.067 09:13:33 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 2839263 ']' 00:35:11.067 09:13:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:11.067 09:13:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:11.067 09:13:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:11.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:11.067 09:13:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:11.067 09:13:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:11.067 [2024-06-09 09:13:33.436197] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:35:11.067 [2024-06-09 09:13:33.436247] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:11.067 EAL: No free 2048 kB hugepages reported on node 1 00:35:11.067 [2024-06-09 09:13:33.518654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:11.067 [2024-06-09 09:13:33.583342] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:11.067 [2024-06-09 09:13:33.583378] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:11.067 [2024-06-09 09:13:33.583386] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:11.067 [2024-06-09 09:13:33.583393] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:11.067 [2024-06-09 09:13:33.583399] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:11.067 [2024-06-09 09:13:33.583602] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:11.067 [2024-06-09 09:13:33.583975] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:35:11.067 [2024-06-09 09:13:33.583976] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:12.011 [2024-06-09 09:13:34.295348] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:12.011 Malloc0 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:12.011 [2024-06-09 09:13:34.358754] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:35:12.011 { 00:35:12.011 "params": { 00:35:12.011 "name": "Nvme$subsystem", 00:35:12.011 "trtype": "$TEST_TRANSPORT", 00:35:12.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:12.011 "adrfam": "ipv4", 00:35:12.011 "trsvcid": "$NVMF_PORT", 00:35:12.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:12.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:12.011 "hdgst": ${hdgst:-false}, 00:35:12.011 "ddgst": ${ddgst:-false} 00:35:12.011 }, 00:35:12.011 "method": "bdev_nvme_attach_controller" 00:35:12.011 } 00:35:12.011 EOF 00:35:12.011 )") 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:35:12.011 09:13:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:12.011 "params": { 00:35:12.011 "name": "Nvme1", 00:35:12.011 "trtype": "tcp", 00:35:12.011 "traddr": "10.0.0.2", 00:35:12.011 "adrfam": "ipv4", 00:35:12.011 "trsvcid": "4420", 00:35:12.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:12.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:12.011 "hdgst": false, 00:35:12.011 "ddgst": false 00:35:12.011 }, 00:35:12.011 "method": "bdev_nvme_attach_controller" 00:35:12.011 }' 00:35:12.011 [2024-06-09 09:13:34.416898] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:35:12.011 [2024-06-09 09:13:34.416946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839295 ] 00:35:12.011 EAL: No free 2048 kB hugepages reported on node 1 00:35:12.011 [2024-06-09 09:13:34.475553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.011 [2024-06-09 09:13:34.540134] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:12.272 Running I/O for 1 seconds... 00:35:13.659 00:35:13.659 Latency(us) 00:35:13.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:13.659 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:13.659 Verification LBA range: start 0x0 length 0x4000 00:35:13.659 Nvme1n1 : 1.01 9167.12 35.81 0.00 0.00 13888.04 2839.89 20862.29 00:35:13.659 =================================================================================================================== 00:35:13.659 Total : 9167.12 35.81 0.00 0.00 13888.04 2839.89 20862.29 00:35:13.659 09:13:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2839629 00:35:13.659 09:13:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:13.659 09:13:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:13.659 09:13:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:13.659 09:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:35:13.659 09:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:35:13.659 09:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:13.659 09:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:13.659 { 
00:35:13.659 "params": { 00:35:13.659 "name": "Nvme$subsystem", 00:35:13.659 "trtype": "$TEST_TRANSPORT", 00:35:13.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:13.659 "adrfam": "ipv4", 00:35:13.659 "trsvcid": "$NVMF_PORT", 00:35:13.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:13.659 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:13.659 "hdgst": ${hdgst:-false}, 00:35:13.659 "ddgst": ${ddgst:-false} 00:35:13.659 }, 00:35:13.659 "method": "bdev_nvme_attach_controller" 00:35:13.659 } 00:35:13.659 EOF 00:35:13.659 )") 00:35:13.659 09:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:35:13.659 09:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:35:13.659 09:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:35:13.659 09:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:13.659 "params": { 00:35:13.659 "name": "Nvme1", 00:35:13.659 "trtype": "tcp", 00:35:13.659 "traddr": "10.0.0.2", 00:35:13.659 "adrfam": "ipv4", 00:35:13.659 "trsvcid": "4420", 00:35:13.659 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:13.659 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:13.659 "hdgst": false, 00:35:13.659 "ddgst": false 00:35:13.659 }, 00:35:13.659 "method": "bdev_nvme_attach_controller" 00:35:13.659 }' 00:35:13.659 [2024-06-09 09:13:35.992536] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:35:13.659 [2024-06-09 09:13:35.992591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839629 ] 00:35:13.659 EAL: No free 2048 kB hugepages reported on node 1 00:35:13.659 [2024-06-09 09:13:36.051432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.659 [2024-06-09 09:13:36.113661] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:13.920 Running I/O for 15 seconds... 00:35:16.470 09:13:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2839263 00:35:16.470 09:13:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:16.470 [2024-06-09 09:13:38.960536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.470 [2024-06-09 09:13:38.960579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.470 [2024-06-09 09:13:38.960599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.470 [2024-06-09 09:13:38.960609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.470 [2024-06-09 09:13:38.960620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.470 [2024-06-09 09:13:38.960628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.470 [2024-06-09 09:13:38.960638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
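Both bdevperf runs above build their `--json` target config on the fly via `gen_nvmf_target_json` (nvmf/common.sh), which appends one heredoc-expanded `bdev_nvme_attach_controller` entry per subsystem and joins them with `IFS=,`. A minimal standalone sketch of that pattern follows; the outer `{"subsystems": ...}` wrapper and the hard-coded variable values are assumptions taken from the log, not the helper's exact code:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json heredoc pattern seen in the log.
# Values below are assumed from this run, not read from the environment.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
  # Each iteration expands the heredoc into one attach-controller entry.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join entries with ',' and wrap them in a bdev-subsystem config document
# (wrapper shape assumed), suitable for bdevperf --json /dev/fd/62.
IFS=,
printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}"
```

In the log this generated document is piped to `jq .` for validation and then fed to bdevperf over a `/dev/fd` process substitution rather than a temporary file.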
00:35:16.470 [2024-06-09 09:13:38.960645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: every remaining queued WRITE (lba 111160-111744) and READ (lba 110776-110896) on sqid:1 completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...] 00:35:16.472 [2024-06-09 09:13:38.962205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.472 [2024-06-09 09:13:38.962212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.472 [2024-06-09 09:13:38.962222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.472 [2024-06-09 09:13:38.962229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.472 [2024-06-09 09:13:38.962238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.473 [2024-06-09 09:13:38.962245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.473 [2024-06-09 09:13:38.962262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:16.473 [2024-06-09 09:13:38.962394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962572] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.473 [2024-06-09 09:13:38.962595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:111064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:111072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:111080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:16.473 [2024-06-09 09:13:38.962759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.473 [2024-06-09 09:13:38.962828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20efe70 is same with the state(5) to be set 00:35:16.473 [2024-06-09 09:13:38.962848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:16.473 [2024-06-09 09:13:38.962854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:35:16.473 [2024-06-09 09:13:38.962861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111128 len:8 PRP1 0x0 PRP2 0x0 00:35:16.473 [2024-06-09 09:13:38.962868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.473 [2024-06-09 09:13:38.962905] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20efe70 was disconnected and freed. reset controller. 00:35:16.473 [2024-06-09 09:13:38.966417] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.473 [2024-06-09 09:13:38.966463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:16.473 [2024-06-09 09:13:38.967415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.473 [2024-06-09 09:13:38.967432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:16.473 [2024-06-09 09:13:38.967440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:16.473 [2024-06-09 09:13:38.967660] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:16.473 [2024-06-09 09:13:38.967879] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.473 [2024-06-09 09:13:38.967888] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.473 [2024-06-09 09:13:38.967896] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.473 [2024-06-09 09:13:38.971443] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.473 [2024-06-09 09:13:38.980651] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.473 [2024-06-09 09:13:38.981364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.473 [2024-06-09 09:13:38.981410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:16.474 [2024-06-09 09:13:38.981421] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:16.474 [2024-06-09 09:13:38.981666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:16.474 [2024-06-09 09:13:38.981890] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.474 [2024-06-09 09:13:38.981899] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.474 [2024-06-09 09:13:38.981906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.474 [2024-06-09 09:13:38.985456] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the same cycle — resetting controller, connect() failed errno = 111 against addr=10.0.0.2 port=4420, controller reinitialization failed, Resetting controller failed — repeats 12 more times, from 09:13:38.994437 through 09:13:39.152089 ...]
00:35:16.737 [2024-06-09 09:13:39.161076] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.737 [2024-06-09 09:13:39.161752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.737 [2024-06-09 09:13:39.161788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:16.737 [2024-06-09 09:13:39.161799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:16.737 [2024-06-09 09:13:39.162036] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:16.737 [2024-06-09 09:13:39.162259] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.737 [2024-06-09 09:13:39.162267] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.737 [2024-06-09 09:13:39.162275] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.737 [2024-06-09 09:13:39.165834] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.737 [2024-06-09 09:13:39.175026] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.737 [2024-06-09 09:13:39.175826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.737 [2024-06-09 09:13:39.175863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:16.737 [2024-06-09 09:13:39.175874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:16.737 [2024-06-09 09:13:39.176112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:16.737 [2024-06-09 09:13:39.176334] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.737 [2024-06-09 09:13:39.176342] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.737 [2024-06-09 09:13:39.176350] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.737 [2024-06-09 09:13:39.179901] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.737 [2024-06-09 09:13:39.188887] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.737 [2024-06-09 09:13:39.189710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.737 [2024-06-09 09:13:39.189747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:16.737 [2024-06-09 09:13:39.189759] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:16.737 [2024-06-09 09:13:39.189998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:16.737 [2024-06-09 09:13:39.190221] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.737 [2024-06-09 09:13:39.190229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.737 [2024-06-09 09:13:39.190236] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.737 [2024-06-09 09:13:39.193788] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.737 [2024-06-09 09:13:39.202783] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.737 [2024-06-09 09:13:39.203506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.737 [2024-06-09 09:13:39.203544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:16.737 [2024-06-09 09:13:39.203556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:16.737 [2024-06-09 09:13:39.203797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:16.737 [2024-06-09 09:13:39.204019] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.737 [2024-06-09 09:13:39.204028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.737 [2024-06-09 09:13:39.204035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.737 [2024-06-09 09:13:39.207585] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.737 [2024-06-09 09:13:39.216574] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.737 [2024-06-09 09:13:39.217391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.737 [2024-06-09 09:13:39.217435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:16.737 [2024-06-09 09:13:39.217450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:16.737 [2024-06-09 09:13:39.217688] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:16.737 [2024-06-09 09:13:39.217910] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.737 [2024-06-09 09:13:39.217919] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.738 [2024-06-09 09:13:39.217927] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.738 [2024-06-09 09:13:39.221476] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.738 [2024-06-09 09:13:39.230468] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.738 [2024-06-09 09:13:39.231250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.738 [2024-06-09 09:13:39.231287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:16.738 [2024-06-09 09:13:39.231297] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:16.738 [2024-06-09 09:13:39.231541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:16.738 [2024-06-09 09:13:39.231764] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.738 [2024-06-09 09:13:39.231773] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.738 [2024-06-09 09:13:39.231780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.738 [2024-06-09 09:13:39.235326] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.738 [2024-06-09 09:13:39.244308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.738 [2024-06-09 09:13:39.245139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.738 [2024-06-09 09:13:39.245176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:16.738 [2024-06-09 09:13:39.245187] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:16.738 [2024-06-09 09:13:39.245432] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:16.738 [2024-06-09 09:13:39.245655] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.738 [2024-06-09 09:13:39.245663] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.738 [2024-06-09 09:13:39.245671] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.738 [2024-06-09 09:13:39.249212] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.738 [2024-06-09 09:13:39.258231] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.738 [2024-06-09 09:13:39.259031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.738 [2024-06-09 09:13:39.259068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:16.738 [2024-06-09 09:13:39.259078] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:16.738 [2024-06-09 09:13:39.259316] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:16.738 [2024-06-09 09:13:39.259548] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.738 [2024-06-09 09:13:39.259562] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.738 [2024-06-09 09:13:39.259569] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.738 [2024-06-09 09:13:39.263113] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.738 [2024-06-09 09:13:39.272098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.738 [2024-06-09 09:13:39.272882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.738 [2024-06-09 09:13:39.272919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:16.738 [2024-06-09 09:13:39.272930] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:16.738 [2024-06-09 09:13:39.273167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:16.738 [2024-06-09 09:13:39.273390] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.738 [2024-06-09 09:13:39.273398] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.738 [2024-06-09 09:13:39.273414] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.738 [2024-06-09 09:13:39.276956] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.738 [2024-06-09 09:13:39.285941] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.738 [2024-06-09 09:13:39.286747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.738 [2024-06-09 09:13:39.286784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:16.738 [2024-06-09 09:13:39.286794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:16.738 [2024-06-09 09:13:39.287032] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:16.738 [2024-06-09 09:13:39.287254] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.738 [2024-06-09 09:13:39.287262] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.738 [2024-06-09 09:13:39.287270] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.738 [2024-06-09 09:13:39.290821] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.000 [2024-06-09 09:13:39.299818] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.000 [2024-06-09 09:13:39.300630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.000 [2024-06-09 09:13:39.300667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.000 [2024-06-09 09:13:39.300678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.000 [2024-06-09 09:13:39.300916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.000 [2024-06-09 09:13:39.301139] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.000 [2024-06-09 09:13:39.301147] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.000 [2024-06-09 09:13:39.301155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.000 [2024-06-09 09:13:39.304710] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.000 [2024-06-09 09:13:39.313734] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.000 [2024-06-09 09:13:39.314501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.000 [2024-06-09 09:13:39.314538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.000 [2024-06-09 09:13:39.314548] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.000 [2024-06-09 09:13:39.314786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.000 [2024-06-09 09:13:39.315009] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.000 [2024-06-09 09:13:39.315017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.000 [2024-06-09 09:13:39.315025] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.000 [2024-06-09 09:13:39.318578] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.000 [2024-06-09 09:13:39.327564] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.000 [2024-06-09 09:13:39.328395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.000 [2024-06-09 09:13:39.328438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.000 [2024-06-09 09:13:39.328449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.000 [2024-06-09 09:13:39.328687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.000 [2024-06-09 09:13:39.328909] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.000 [2024-06-09 09:13:39.328918] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.000 [2024-06-09 09:13:39.328925] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.000 [2024-06-09 09:13:39.332474] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.000 [2024-06-09 09:13:39.341458] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.000 [2024-06-09 09:13:39.342250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.000 [2024-06-09 09:13:39.342287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.000 [2024-06-09 09:13:39.342298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.000 [2024-06-09 09:13:39.342543] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.000 [2024-06-09 09:13:39.342766] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.000 [2024-06-09 09:13:39.342775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.000 [2024-06-09 09:13:39.342782] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.000 [2024-06-09 09:13:39.346323] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.000 [2024-06-09 09:13:39.355313] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.000 [2024-06-09 09:13:39.356143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.000 [2024-06-09 09:13:39.356180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.000 [2024-06-09 09:13:39.356190] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.000 [2024-06-09 09:13:39.356442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.000 [2024-06-09 09:13:39.356665] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.000 [2024-06-09 09:13:39.356673] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.000 [2024-06-09 09:13:39.356681] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.000 [2024-06-09 09:13:39.360224] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.000 [2024-06-09 09:13:39.369209] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.001 [2024-06-09 09:13:39.370000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.001 [2024-06-09 09:13:39.370037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.001 [2024-06-09 09:13:39.370048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.001 [2024-06-09 09:13:39.370286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.001 [2024-06-09 09:13:39.370516] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.001 [2024-06-09 09:13:39.370525] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.001 [2024-06-09 09:13:39.370533] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.001 [2024-06-09 09:13:39.374075] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.001 [2024-06-09 09:13:39.383059] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.001 [2024-06-09 09:13:39.383842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.001 [2024-06-09 09:13:39.383879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.001 [2024-06-09 09:13:39.383890] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.001 [2024-06-09 09:13:39.384128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.001 [2024-06-09 09:13:39.384351] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.001 [2024-06-09 09:13:39.384359] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.001 [2024-06-09 09:13:39.384366] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.001 [2024-06-09 09:13:39.387924] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.001 [2024-06-09 09:13:39.396907] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.001 [2024-06-09 09:13:39.397489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.001 [2024-06-09 09:13:39.397526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.001 [2024-06-09 09:13:39.397539] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.001 [2024-06-09 09:13:39.397780] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.001 [2024-06-09 09:13:39.398002] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.001 [2024-06-09 09:13:39.398012] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.001 [2024-06-09 09:13:39.398023] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.001 [2024-06-09 09:13:39.401585] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.001 [2024-06-09 09:13:39.410779] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.001 [2024-06-09 09:13:39.411484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.001 [2024-06-09 09:13:39.411521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.001 [2024-06-09 09:13:39.411533] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.001 [2024-06-09 09:13:39.411772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.001 [2024-06-09 09:13:39.411995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.001 [2024-06-09 09:13:39.412003] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.001 [2024-06-09 09:13:39.412011] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.001 [2024-06-09 09:13:39.415565] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.001 [2024-06-09 09:13:39.424762] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.001 [2024-06-09 09:13:39.425416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.001 [2024-06-09 09:13:39.425452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.001 [2024-06-09 09:13:39.425465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.001 [2024-06-09 09:13:39.425705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.001 [2024-06-09 09:13:39.425927] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.001 [2024-06-09 09:13:39.425935] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.001 [2024-06-09 09:13:39.425943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.001 [2024-06-09 09:13:39.429491] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.001 [2024-06-09 09:13:39.438685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.001 [2024-06-09 09:13:39.439389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.001 [2024-06-09 09:13:39.439432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.001 [2024-06-09 09:13:39.439445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.001 [2024-06-09 09:13:39.439684] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.001 [2024-06-09 09:13:39.439906] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.001 [2024-06-09 09:13:39.439915] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.001 [2024-06-09 09:13:39.439922] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.001 [2024-06-09 09:13:39.443471] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... identical reset/reconnect cycle repeats 27 more times (09:13:39.452 through 09:13:39.818, ~14 ms apart), each attempt failing the same way: connect() errno = 111 on tqpair=0x1e95dc0, addr=10.0.0.2, port=4420, ending in "Resetting controller failed." ...]
00:35:17.529 [2024-06-09 09:13:39.827680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.529 [2024-06-09 09:13:39.828421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.529 [2024-06-09 09:13:39.828442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.529 [2024-06-09 09:13:39.828450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.529 [2024-06-09 09:13:39.828670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.529 [2024-06-09 09:13:39.828890] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.529 [2024-06-09 09:13:39.828898] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.529 [2024-06-09 09:13:39.828904] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.529 [2024-06-09 09:13:39.832454] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.529 [2024-06-09 09:13:39.841652] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.529 [2024-06-09 09:13:39.842471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.529 [2024-06-09 09:13:39.842508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.529 [2024-06-09 09:13:39.842520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.529 [2024-06-09 09:13:39.842762] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.529 [2024-06-09 09:13:39.842984] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.529 [2024-06-09 09:13:39.842993] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.529 [2024-06-09 09:13:39.843000] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.529 [2024-06-09 09:13:39.846552] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.529 [2024-06-09 09:13:39.855540] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.529 [2024-06-09 09:13:39.856280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.529 [2024-06-09 09:13:39.856297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.529 [2024-06-09 09:13:39.856304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.529 [2024-06-09 09:13:39.856530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.529 [2024-06-09 09:13:39.856749] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.529 [2024-06-09 09:13:39.856757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.529 [2024-06-09 09:13:39.856763] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.529 [2024-06-09 09:13:39.860303] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.529 [2024-06-09 09:13:39.869501] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.529 [2024-06-09 09:13:39.870255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.529 [2024-06-09 09:13:39.870292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.529 [2024-06-09 09:13:39.870303] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.530 [2024-06-09 09:13:39.870553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.530 [2024-06-09 09:13:39.870777] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.530 [2024-06-09 09:13:39.870786] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.530 [2024-06-09 09:13:39.870793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.530 [2024-06-09 09:13:39.874337] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.530 [2024-06-09 09:13:39.883327] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.530 [2024-06-09 09:13:39.884111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.530 [2024-06-09 09:13:39.884148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.530 [2024-06-09 09:13:39.884158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.530 [2024-06-09 09:13:39.884396] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.530 [2024-06-09 09:13:39.884626] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.530 [2024-06-09 09:13:39.884635] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.530 [2024-06-09 09:13:39.884642] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.530 [2024-06-09 09:13:39.888186] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.530 [2024-06-09 09:13:39.897261] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.530 [2024-06-09 09:13:39.898057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.530 [2024-06-09 09:13:39.898094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.530 [2024-06-09 09:13:39.898106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.530 [2024-06-09 09:13:39.898345] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.530 [2024-06-09 09:13:39.898583] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.530 [2024-06-09 09:13:39.898593] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.530 [2024-06-09 09:13:39.898600] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.530 [2024-06-09 09:13:39.902146] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.530 [2024-06-09 09:13:39.911136] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.530 [2024-06-09 09:13:39.911917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.530 [2024-06-09 09:13:39.911954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.530 [2024-06-09 09:13:39.911965] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.530 [2024-06-09 09:13:39.912203] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.530 [2024-06-09 09:13:39.912433] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.530 [2024-06-09 09:13:39.912442] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.530 [2024-06-09 09:13:39.912455] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.530 [2024-06-09 09:13:39.916000] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.530 [2024-06-09 09:13:39.924991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.530 [2024-06-09 09:13:39.925778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.530 [2024-06-09 09:13:39.925815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.530 [2024-06-09 09:13:39.925826] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.530 [2024-06-09 09:13:39.926064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.530 [2024-06-09 09:13:39.926287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.530 [2024-06-09 09:13:39.926295] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.530 [2024-06-09 09:13:39.926302] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.530 [2024-06-09 09:13:39.929855] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.530 [2024-06-09 09:13:39.938853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.530 [2024-06-09 09:13:39.939685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.530 [2024-06-09 09:13:39.939722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.530 [2024-06-09 09:13:39.939732] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.530 [2024-06-09 09:13:39.939970] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.530 [2024-06-09 09:13:39.940193] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.530 [2024-06-09 09:13:39.940202] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.530 [2024-06-09 09:13:39.940209] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.530 [2024-06-09 09:13:39.943763] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.530 [2024-06-09 09:13:39.952753] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.530 [2024-06-09 09:13:39.953587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.530 [2024-06-09 09:13:39.953623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.530 [2024-06-09 09:13:39.953633] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.530 [2024-06-09 09:13:39.953871] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.530 [2024-06-09 09:13:39.954094] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.530 [2024-06-09 09:13:39.954104] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.530 [2024-06-09 09:13:39.954112] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.530 [2024-06-09 09:13:39.957664] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.530 [2024-06-09 09:13:39.966655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.530 [2024-06-09 09:13:39.967486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.530 [2024-06-09 09:13:39.967522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.530 [2024-06-09 09:13:39.967534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.530 [2024-06-09 09:13:39.967774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.530 [2024-06-09 09:13:39.967996] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.530 [2024-06-09 09:13:39.968006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.530 [2024-06-09 09:13:39.968013] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.530 [2024-06-09 09:13:39.971566] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.530 [2024-06-09 09:13:39.980558] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.530 [2024-06-09 09:13:39.981139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.530 [2024-06-09 09:13:39.981157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.530 [2024-06-09 09:13:39.981165] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.530 [2024-06-09 09:13:39.981384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.530 [2024-06-09 09:13:39.981611] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.530 [2024-06-09 09:13:39.981619] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.530 [2024-06-09 09:13:39.981626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.530 [2024-06-09 09:13:39.985167] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.530 [2024-06-09 09:13:39.994366] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.530 [2024-06-09 09:13:39.995203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.530 [2024-06-09 09:13:39.995240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.530 [2024-06-09 09:13:39.995250] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.530 [2024-06-09 09:13:39.995496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.530 [2024-06-09 09:13:39.995720] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.530 [2024-06-09 09:13:39.995728] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.530 [2024-06-09 09:13:39.995735] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.530 [2024-06-09 09:13:39.999289] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.530 [2024-06-09 09:13:40.008428] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.530 [2024-06-09 09:13:40.009216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.530 [2024-06-09 09:13:40.009254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.530 [2024-06-09 09:13:40.009264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.530 [2024-06-09 09:13:40.009509] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.531 [2024-06-09 09:13:40.009737] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.531 [2024-06-09 09:13:40.009747] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.531 [2024-06-09 09:13:40.009754] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.531 [2024-06-09 09:13:40.013300] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.531 [2024-06-09 09:13:40.022972] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.531 [2024-06-09 09:13:40.023787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.531 [2024-06-09 09:13:40.023825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.531 [2024-06-09 09:13:40.023836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.531 [2024-06-09 09:13:40.024074] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.531 [2024-06-09 09:13:40.024296] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.531 [2024-06-09 09:13:40.024305] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.531 [2024-06-09 09:13:40.024313] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.531 [2024-06-09 09:13:40.027865] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.531 [2024-06-09 09:13:40.036856] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.531 [2024-06-09 09:13:40.037688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.531 [2024-06-09 09:13:40.037725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.531 [2024-06-09 09:13:40.037736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.531 [2024-06-09 09:13:40.037975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.531 [2024-06-09 09:13:40.038198] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.531 [2024-06-09 09:13:40.038206] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.531 [2024-06-09 09:13:40.038214] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.531 [2024-06-09 09:13:40.041775] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.531 [2024-06-09 09:13:40.050772] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.531 [2024-06-09 09:13:40.051488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.531 [2024-06-09 09:13:40.051525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.531 [2024-06-09 09:13:40.051537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.531 [2024-06-09 09:13:40.051779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.531 [2024-06-09 09:13:40.052001] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.531 [2024-06-09 09:13:40.052011] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.531 [2024-06-09 09:13:40.052018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.531 [2024-06-09 09:13:40.055575] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.531 [2024-06-09 09:13:40.064571] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.531 [2024-06-09 09:13:40.065359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.531 [2024-06-09 09:13:40.065396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.531 [2024-06-09 09:13:40.065416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.531 [2024-06-09 09:13:40.065659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.531 [2024-06-09 09:13:40.065881] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.531 [2024-06-09 09:13:40.065891] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.531 [2024-06-09 09:13:40.065899] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.531 [2024-06-09 09:13:40.069448] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.531 [2024-06-09 09:13:40.078437] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.531 [2024-06-09 09:13:40.079243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.531 [2024-06-09 09:13:40.079280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.531 [2024-06-09 09:13:40.079293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.531 [2024-06-09 09:13:40.079540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.531 [2024-06-09 09:13:40.079764] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.531 [2024-06-09 09:13:40.079773] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.531 [2024-06-09 09:13:40.079781] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.531 [2024-06-09 09:13:40.084189] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.793 [2024-06-09 09:13:40.092317] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.793 [2024-06-09 09:13:40.093134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.793 [2024-06-09 09:13:40.093171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.793 [2024-06-09 09:13:40.093183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.793 [2024-06-09 09:13:40.093428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.793 [2024-06-09 09:13:40.093651] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.793 [2024-06-09 09:13:40.093660] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.793 [2024-06-09 09:13:40.093668] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.793 [2024-06-09 09:13:40.097209] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.793 [2024-06-09 09:13:40.106213] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.793 [2024-06-09 09:13:40.107024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.793 [2024-06-09 09:13:40.107061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.793 [2024-06-09 09:13:40.107076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.793 [2024-06-09 09:13:40.107314] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.793 [2024-06-09 09:13:40.107544] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.793 [2024-06-09 09:13:40.107553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.793 [2024-06-09 09:13:40.107561] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.793 [2024-06-09 09:13:40.111105] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.793 [2024-06-09 09:13:40.120099] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.793 [2024-06-09 09:13:40.120801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.793 [2024-06-09 09:13:40.120838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.793 [2024-06-09 09:13:40.120849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.793 [2024-06-09 09:13:40.121087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.793 [2024-06-09 09:13:40.121310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.793 [2024-06-09 09:13:40.121319] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.793 [2024-06-09 09:13:40.121326] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.793 [2024-06-09 09:13:40.124878] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.793 [2024-06-09 09:13:40.134080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.793 [2024-06-09 09:13:40.134790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.793 [2024-06-09 09:13:40.134826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.793 [2024-06-09 09:13:40.134837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.793 [2024-06-09 09:13:40.135075] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.793 [2024-06-09 09:13:40.135298] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.793 [2024-06-09 09:13:40.135307] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.793 [2024-06-09 09:13:40.135314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.793 [2024-06-09 09:13:40.138865] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.793 [2024-06-09 09:13:40.147880] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.793 [2024-06-09 09:13:40.148665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.793 [2024-06-09 09:13:40.148702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.793 [2024-06-09 09:13:40.148713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.793 [2024-06-09 09:13:40.148951] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.793 [2024-06-09 09:13:40.149179] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.793 [2024-06-09 09:13:40.149188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.793 [2024-06-09 09:13:40.149196] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.793 [2024-06-09 09:13:40.152751] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.793 [2024-06-09 09:13:40.161745] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.793 [2024-06-09 09:13:40.162508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.793 [2024-06-09 09:13:40.162546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.793 [2024-06-09 09:13:40.162558] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.793 [2024-06-09 09:13:40.162798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.793 [2024-06-09 09:13:40.163021] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.793 [2024-06-09 09:13:40.163029] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.793 [2024-06-09 09:13:40.163036] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.793 [2024-06-09 09:13:40.166594] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.793 [2024-06-09 09:13:40.175581] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.793 [2024-06-09 09:13:40.176371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.793 [2024-06-09 09:13:40.176417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.793 [2024-06-09 09:13:40.176430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.793 [2024-06-09 09:13:40.176669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.793 [2024-06-09 09:13:40.176892] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.793 [2024-06-09 09:13:40.176900] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.793 [2024-06-09 09:13:40.176907] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.793 [2024-06-09 09:13:40.180454] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.793 [2024-06-09 09:13:40.189448] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.793 [2024-06-09 09:13:40.190273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.793 [2024-06-09 09:13:40.190310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.793 [2024-06-09 09:13:40.190320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.794 [2024-06-09 09:13:40.190566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.794 [2024-06-09 09:13:40.190789] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.794 [2024-06-09 09:13:40.190798] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.794 [2024-06-09 09:13:40.190805] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.794 [2024-06-09 09:13:40.194348] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.794 [2024-06-09 09:13:40.203351] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.794 [2024-06-09 09:13:40.204136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.794 [2024-06-09 09:13:40.204173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.794 [2024-06-09 09:13:40.204184] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.794 [2024-06-09 09:13:40.204429] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.794 [2024-06-09 09:13:40.204653] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.794 [2024-06-09 09:13:40.204662] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.794 [2024-06-09 09:13:40.204670] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.794 [2024-06-09 09:13:40.208218] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.794 [2024-06-09 09:13:40.217251] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.794 [2024-06-09 09:13:40.218066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.794 [2024-06-09 09:13:40.218103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.794 [2024-06-09 09:13:40.218114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.794 [2024-06-09 09:13:40.218351] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.794 [2024-06-09 09:13:40.218582] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.794 [2024-06-09 09:13:40.218592] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.794 [2024-06-09 09:13:40.218600] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.794 [2024-06-09 09:13:40.222143] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.794 [2024-06-09 09:13:40.231132] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.794 [2024-06-09 09:13:40.231802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.794 [2024-06-09 09:13:40.231821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.794 [2024-06-09 09:13:40.231829] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.794 [2024-06-09 09:13:40.232049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.794 [2024-06-09 09:13:40.232267] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.794 [2024-06-09 09:13:40.232275] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.794 [2024-06-09 09:13:40.232281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.794 [2024-06-09 09:13:40.235828] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.794 [2024-06-09 09:13:40.245024] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.794 [2024-06-09 09:13:40.246060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.794 [2024-06-09 09:13:40.246082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.794 [2024-06-09 09:13:40.246094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.794 [2024-06-09 09:13:40.246319] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.794 [2024-06-09 09:13:40.246544] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.794 [2024-06-09 09:13:40.246553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.794 [2024-06-09 09:13:40.246559] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.794 [2024-06-09 09:13:40.250105] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.794 [2024-06-09 09:13:40.258881] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.794 [2024-06-09 09:13:40.259747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.794 [2024-06-09 09:13:40.259784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.794 [2024-06-09 09:13:40.259794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.794 [2024-06-09 09:13:40.260033] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.794 [2024-06-09 09:13:40.260255] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.794 [2024-06-09 09:13:40.260263] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.794 [2024-06-09 09:13:40.260271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.794 [2024-06-09 09:13:40.263845] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.794 [2024-06-09 09:13:40.272842] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.794 [2024-06-09 09:13:40.273620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.794 [2024-06-09 09:13:40.273657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.794 [2024-06-09 09:13:40.273667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.794 [2024-06-09 09:13:40.273905] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.794 [2024-06-09 09:13:40.274128] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.794 [2024-06-09 09:13:40.274136] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.794 [2024-06-09 09:13:40.274143] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.794 [2024-06-09 09:13:40.277693] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.794 [2024-06-09 09:13:40.286685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.794 [2024-06-09 09:13:40.287445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.794 [2024-06-09 09:13:40.287470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.794 [2024-06-09 09:13:40.287479] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.794 [2024-06-09 09:13:40.287702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.794 [2024-06-09 09:13:40.287922] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.794 [2024-06-09 09:13:40.287934] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.794 [2024-06-09 09:13:40.287942] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.794 [2024-06-09 09:13:40.291488] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.794 [2024-06-09 09:13:40.300476] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.794 [2024-06-09 09:13:40.301249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.794 [2024-06-09 09:13:40.301286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.794 [2024-06-09 09:13:40.301296] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.794 [2024-06-09 09:13:40.301542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.794 [2024-06-09 09:13:40.301766] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.794 [2024-06-09 09:13:40.301775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.794 [2024-06-09 09:13:40.301782] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.794 [2024-06-09 09:13:40.305323] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.794 [2024-06-09 09:13:40.314306] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.794 [2024-06-09 09:13:40.315071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.794 [2024-06-09 09:13:40.315109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.794 [2024-06-09 09:13:40.315121] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.794 [2024-06-09 09:13:40.315360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.794 [2024-06-09 09:13:40.315591] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.794 [2024-06-09 09:13:40.315600] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.794 [2024-06-09 09:13:40.315608] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.794 [2024-06-09 09:13:40.319150] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.795 [2024-06-09 09:13:40.328138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.795 [2024-06-09 09:13:40.328934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.795 [2024-06-09 09:13:40.328971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.795 [2024-06-09 09:13:40.328982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.795 [2024-06-09 09:13:40.329220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.795 [2024-06-09 09:13:40.329449] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.795 [2024-06-09 09:13:40.329459] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.795 [2024-06-09 09:13:40.329466] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.795 [2024-06-09 09:13:40.333013] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.795 [2024-06-09 09:13:40.341991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.795 [2024-06-09 09:13:40.342816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.795 [2024-06-09 09:13:40.342853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:17.795 [2024-06-09 09:13:40.342864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:17.795 [2024-06-09 09:13:40.343101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:17.795 [2024-06-09 09:13:40.343324] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.795 [2024-06-09 09:13:40.343332] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.795 [2024-06-09 09:13:40.343340] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.795 [2024-06-09 09:13:40.346891] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.056 [2024-06-09 09:13:40.355881] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.056 [2024-06-09 09:13:40.356709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.056 [2024-06-09 09:13:40.356746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.056 [2024-06-09 09:13:40.356756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.056 [2024-06-09 09:13:40.356994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.056 [2024-06-09 09:13:40.357217] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.056 [2024-06-09 09:13:40.357225] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.056 [2024-06-09 09:13:40.357232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.056 [2024-06-09 09:13:40.360782] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.056 [2024-06-09 09:13:40.369767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.056 [2024-06-09 09:13:40.370494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.056 [2024-06-09 09:13:40.370512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.056 [2024-06-09 09:13:40.370520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.056 [2024-06-09 09:13:40.370739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.056 [2024-06-09 09:13:40.370958] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.056 [2024-06-09 09:13:40.370965] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.056 [2024-06-09 09:13:40.370972] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.056 [2024-06-09 09:13:40.374516] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.056 [2024-06-09 09:13:40.383709] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.057 [2024-06-09 09:13:40.384507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.057 [2024-06-09 09:13:40.384544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.057 [2024-06-09 09:13:40.384556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.057 [2024-06-09 09:13:40.384802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.057 [2024-06-09 09:13:40.385025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.057 [2024-06-09 09:13:40.385033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.057 [2024-06-09 09:13:40.385041] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.057 [2024-06-09 09:13:40.388591] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.057 [2024-06-09 09:13:40.397572] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.057 [2024-06-09 09:13:40.398195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.057 [2024-06-09 09:13:40.398213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.057 [2024-06-09 09:13:40.398221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.057 [2024-06-09 09:13:40.398445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.057 [2024-06-09 09:13:40.398664] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.057 [2024-06-09 09:13:40.398672] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.057 [2024-06-09 09:13:40.398678] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.057 [2024-06-09 09:13:40.402226] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.057 [2024-06-09 09:13:40.411417] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.057 [2024-06-09 09:13:40.412027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.057 [2024-06-09 09:13:40.412041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.057 [2024-06-09 09:13:40.412049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.057 [2024-06-09 09:13:40.412267] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.057 [2024-06-09 09:13:40.412490] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.057 [2024-06-09 09:13:40.412500] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.057 [2024-06-09 09:13:40.412506] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.057 [2024-06-09 09:13:40.416045] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.057 [2024-06-09 09:13:40.425230] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.057 [2024-06-09 09:13:40.425980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.057 [2024-06-09 09:13:40.425994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.057 [2024-06-09 09:13:40.426001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.057 [2024-06-09 09:13:40.426220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.057 [2024-06-09 09:13:40.426443] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.057 [2024-06-09 09:13:40.426451] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.057 [2024-06-09 09:13:40.426462] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.057 [2024-06-09 09:13:40.430000] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.057 [2024-06-09 09:13:40.439189] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.057 [2024-06-09 09:13:40.439859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.057 [2024-06-09 09:13:40.439873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.057 [2024-06-09 09:13:40.439881] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.057 [2024-06-09 09:13:40.440100] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.057 [2024-06-09 09:13:40.440318] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.057 [2024-06-09 09:13:40.440325] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.057 [2024-06-09 09:13:40.440332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.057 [2024-06-09 09:13:40.443872] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.057-00:35:18.322 [2024-06-09 09:13:40.453062 - 09:13:40.818613] (27 further identical reset cycles for [nqn.2016-06.io.spdk:cnode1], tqpair=0x1e95dc0: connect() to addr=10.0.0.2, port=4420 failed with errno = 111; controller reinitialization failed; each cycle ended with bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.)
00:35:18.322 [2024-06-09 09:13:40.827819] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.322 [2024-06-09 09:13:40.828653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.322 [2024-06-09 09:13:40.828690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.322 [2024-06-09 09:13:40.828700] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.322 [2024-06-09 09:13:40.828938] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.322 [2024-06-09 09:13:40.829160] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.322 [2024-06-09 09:13:40.829169] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.322 [2024-06-09 09:13:40.829176] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.322 [2024-06-09 09:13:40.832723] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.322 [2024-06-09 09:13:40.841713] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.322 [2024-06-09 09:13:40.842597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.322 [2024-06-09 09:13:40.842634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.322 [2024-06-09 09:13:40.842645] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.322 [2024-06-09 09:13:40.842883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.322 [2024-06-09 09:13:40.843105] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.322 [2024-06-09 09:13:40.843114] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.322 [2024-06-09 09:13:40.843122] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.322 [2024-06-09 09:13:40.846674] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.322 [2024-06-09 09:13:40.855694] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.322 [2024-06-09 09:13:40.856513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.322 [2024-06-09 09:13:40.856550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.322 [2024-06-09 09:13:40.856561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.322 [2024-06-09 09:13:40.856799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.322 [2024-06-09 09:13:40.857021] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.322 [2024-06-09 09:13:40.857030] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.322 [2024-06-09 09:13:40.857037] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.322 [2024-06-09 09:13:40.860589] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.322 [2024-06-09 09:13:40.869575] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.322 [2024-06-09 09:13:40.870392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.322 [2024-06-09 09:13:40.870437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.322 [2024-06-09 09:13:40.870447] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.322 [2024-06-09 09:13:40.870686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.322 [2024-06-09 09:13:40.870907] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.322 [2024-06-09 09:13:40.870916] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.322 [2024-06-09 09:13:40.870923] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.322 [2024-06-09 09:13:40.874484] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.584 [2024-06-09 09:13:40.883479] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.584 [2024-06-09 09:13:40.884295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.584 [2024-06-09 09:13:40.884332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.584 [2024-06-09 09:13:40.884342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.584 [2024-06-09 09:13:40.884590] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.584 [2024-06-09 09:13:40.884814] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.584 [2024-06-09 09:13:40.884822] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.584 [2024-06-09 09:13:40.884830] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.584 [2024-06-09 09:13:40.888374] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.584 [2024-06-09 09:13:40.897358] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.585 [2024-06-09 09:13:40.898064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.585 [2024-06-09 09:13:40.898100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.585 [2024-06-09 09:13:40.898111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.585 [2024-06-09 09:13:40.898357] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.585 [2024-06-09 09:13:40.898589] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.585 [2024-06-09 09:13:40.898599] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.585 [2024-06-09 09:13:40.898606] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.585 [2024-06-09 09:13:40.902159] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.585 [2024-06-09 09:13:40.911144] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.585 [2024-06-09 09:13:40.911834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.585 [2024-06-09 09:13:40.911853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.585 [2024-06-09 09:13:40.911860] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.585 [2024-06-09 09:13:40.912080] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.585 [2024-06-09 09:13:40.912298] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.585 [2024-06-09 09:13:40.912306] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.585 [2024-06-09 09:13:40.912313] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.585 [2024-06-09 09:13:40.915891] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.585 [2024-06-09 09:13:40.925077] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.585 [2024-06-09 09:13:40.925857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.585 [2024-06-09 09:13:40.925894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.585 [2024-06-09 09:13:40.925904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.585 [2024-06-09 09:13:40.926143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.585 [2024-06-09 09:13:40.926365] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.585 [2024-06-09 09:13:40.926373] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.585 [2024-06-09 09:13:40.926380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.585 [2024-06-09 09:13:40.929932] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.585 [2024-06-09 09:13:40.938914] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.585 [2024-06-09 09:13:40.939679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.585 [2024-06-09 09:13:40.939716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.585 [2024-06-09 09:13:40.939727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.585 [2024-06-09 09:13:40.939964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.585 [2024-06-09 09:13:40.940187] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.585 [2024-06-09 09:13:40.940195] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.585 [2024-06-09 09:13:40.940208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.585 [2024-06-09 09:13:40.943759] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.585 [2024-06-09 09:13:40.952735] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.585 [2024-06-09 09:13:40.953575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.585 [2024-06-09 09:13:40.953612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.585 [2024-06-09 09:13:40.953623] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.585 [2024-06-09 09:13:40.953861] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.585 [2024-06-09 09:13:40.954084] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.585 [2024-06-09 09:13:40.954092] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.585 [2024-06-09 09:13:40.954100] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.585 [2024-06-09 09:13:40.957652] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.585 [2024-06-09 09:13:40.966629] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.585 [2024-06-09 09:13:40.967361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.585 [2024-06-09 09:13:40.967379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.585 [2024-06-09 09:13:40.967386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.585 [2024-06-09 09:13:40.967611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.585 [2024-06-09 09:13:40.967830] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.585 [2024-06-09 09:13:40.967837] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.585 [2024-06-09 09:13:40.967844] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.585 [2024-06-09 09:13:40.971380] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.585 [2024-06-09 09:13:40.980587] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.585 [2024-06-09 09:13:40.981258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.585 [2024-06-09 09:13:40.981274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.585 [2024-06-09 09:13:40.981282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.585 [2024-06-09 09:13:40.981506] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.585 [2024-06-09 09:13:40.981725] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.585 [2024-06-09 09:13:40.981733] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.585 [2024-06-09 09:13:40.981739] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.585 [2024-06-09 09:13:40.985277] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.585 [2024-06-09 09:13:40.994464] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.585 [2024-06-09 09:13:40.995142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.585 [2024-06-09 09:13:40.995162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.585 [2024-06-09 09:13:40.995169] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.585 [2024-06-09 09:13:40.995388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.585 [2024-06-09 09:13:40.995612] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.585 [2024-06-09 09:13:40.995621] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.585 [2024-06-09 09:13:40.995627] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.585 [2024-06-09 09:13:40.999163] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.585 [2024-06-09 09:13:41.008360] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.585 [2024-06-09 09:13:41.009124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.585 [2024-06-09 09:13:41.009161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.585 [2024-06-09 09:13:41.009172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.585 [2024-06-09 09:13:41.009419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.585 [2024-06-09 09:13:41.009643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.585 [2024-06-09 09:13:41.009652] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.585 [2024-06-09 09:13:41.009659] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.585 [2024-06-09 09:13:41.013202] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.585 [2024-06-09 09:13:41.022178] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.585 [2024-06-09 09:13:41.022858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.585 [2024-06-09 09:13:41.022876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.585 [2024-06-09 09:13:41.022884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.585 [2024-06-09 09:13:41.023102] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.585 [2024-06-09 09:13:41.023321] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.586 [2024-06-09 09:13:41.023328] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.586 [2024-06-09 09:13:41.023335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.586 [2024-06-09 09:13:41.026875] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.586 [2024-06-09 09:13:41.036056] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.586 [2024-06-09 09:13:41.036726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.586 [2024-06-09 09:13:41.036742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.586 [2024-06-09 09:13:41.036749] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.586 [2024-06-09 09:13:41.036968] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.586 [2024-06-09 09:13:41.037190] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.586 [2024-06-09 09:13:41.037198] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.586 [2024-06-09 09:13:41.037205] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.586 [2024-06-09 09:13:41.040747] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.586 [2024-06-09 09:13:41.050023] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.586 [2024-06-09 09:13:41.050719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.586 [2024-06-09 09:13:41.050736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.586 [2024-06-09 09:13:41.050743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.586 [2024-06-09 09:13:41.050962] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.586 [2024-06-09 09:13:41.051180] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.586 [2024-06-09 09:13:41.051188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.586 [2024-06-09 09:13:41.051194] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.586 [2024-06-09 09:13:41.054736] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.586 [2024-06-09 09:13:41.063914] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.586 [2024-06-09 09:13:41.064741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.586 [2024-06-09 09:13:41.064778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.586 [2024-06-09 09:13:41.064788] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.586 [2024-06-09 09:13:41.065026] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.586 [2024-06-09 09:13:41.065249] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.586 [2024-06-09 09:13:41.065257] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.586 [2024-06-09 09:13:41.065265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.586 [2024-06-09 09:13:41.068815] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.586 [2024-06-09 09:13:41.077801] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.586 [2024-06-09 09:13:41.078570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.586 [2024-06-09 09:13:41.078608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.586 [2024-06-09 09:13:41.078618] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.586 [2024-06-09 09:13:41.078856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.586 [2024-06-09 09:13:41.079079] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.586 [2024-06-09 09:13:41.079087] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.586 [2024-06-09 09:13:41.079095] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.586 [2024-06-09 09:13:41.082657] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.586 [2024-06-09 09:13:41.091644] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.586 [2024-06-09 09:13:41.092464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.586 [2024-06-09 09:13:41.092501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.586 [2024-06-09 09:13:41.092513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.586 [2024-06-09 09:13:41.092752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.586 [2024-06-09 09:13:41.092974] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.586 [2024-06-09 09:13:41.092982] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.586 [2024-06-09 09:13:41.092990] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.586 [2024-06-09 09:13:41.096539] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.586 [2024-06-09 09:13:41.105531] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.586 [2024-06-09 09:13:41.106336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.586 [2024-06-09 09:13:41.106373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.586 [2024-06-09 09:13:41.106384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.586 [2024-06-09 09:13:41.106630] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.586 [2024-06-09 09:13:41.106854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.586 [2024-06-09 09:13:41.106862] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.586 [2024-06-09 09:13:41.106869] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.586 [2024-06-09 09:13:41.110414] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.586 [2024-06-09 09:13:41.119393] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.586 [2024-06-09 09:13:41.120211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.586 [2024-06-09 09:13:41.120248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:18.586 [2024-06-09 09:13:41.120258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:18.586 [2024-06-09 09:13:41.120506] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:18.586 [2024-06-09 09:13:41.120729] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.586 [2024-06-09 09:13:41.120738] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.586 [2024-06-09 09:13:41.120745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.586 [2024-06-09 09:13:41.124286] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.586 [2024-06-09 09:13:41.133275] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.586 [2024-06-09 09:13:41.134081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.586 [2024-06-09 09:13:41.134119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.586 [2024-06-09 09:13:41.134134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.586 [2024-06-09 09:13:41.134372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.586 [2024-06-09 09:13:41.134605] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.586 [2024-06-09 09:13:41.134614] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.586 [2024-06-09 09:13:41.134621] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.586 [2024-06-09 09:13:41.138170] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.849 [2024-06-09 09:13:41.147161] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.849 [2024-06-09 09:13:41.147950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.849 [2024-06-09 09:13:41.147987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.849 [2024-06-09 09:13:41.147997] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.849 [2024-06-09 09:13:41.148235] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.849 [2024-06-09 09:13:41.148468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.849 [2024-06-09 09:13:41.148477] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.849 [2024-06-09 09:13:41.148485] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.849 [2024-06-09 09:13:41.152029] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.849 [2024-06-09 09:13:41.161009] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.849 [2024-06-09 09:13:41.161808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.849 [2024-06-09 09:13:41.161845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.849 [2024-06-09 09:13:41.161855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.849 [2024-06-09 09:13:41.162093] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.849 [2024-06-09 09:13:41.162315] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.849 [2024-06-09 09:13:41.162323] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.849 [2024-06-09 09:13:41.162331] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.849 [2024-06-09 09:13:41.165884] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.849 [2024-06-09 09:13:41.174862] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.849 [2024-06-09 09:13:41.175679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.849 [2024-06-09 09:13:41.175716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.849 [2024-06-09 09:13:41.175727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.849 [2024-06-09 09:13:41.175965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.849 [2024-06-09 09:13:41.176187] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.849 [2024-06-09 09:13:41.176201] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.849 [2024-06-09 09:13:41.176208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.849 [2024-06-09 09:13:41.179760] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.849 [2024-06-09 09:13:41.188748] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.849 [2024-06-09 09:13:41.189446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.849 [2024-06-09 09:13:41.189464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.849 [2024-06-09 09:13:41.189472] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.849 [2024-06-09 09:13:41.189691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.849 [2024-06-09 09:13:41.189910] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.849 [2024-06-09 09:13:41.189917] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.849 [2024-06-09 09:13:41.189924] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.849 [2024-06-09 09:13:41.193466] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.849 [2024-06-09 09:13:41.202651] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.849 [2024-06-09 09:13:41.203460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.849 [2024-06-09 09:13:41.203497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.849 [2024-06-09 09:13:41.203507] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.849 [2024-06-09 09:13:41.203745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.850 [2024-06-09 09:13:41.203967] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.850 [2024-06-09 09:13:41.203976] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.850 [2024-06-09 09:13:41.203983] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.850 [2024-06-09 09:13:41.207537] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.850 [2024-06-09 09:13:41.216513] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.850 [2024-06-09 09:13:41.217290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.850 [2024-06-09 09:13:41.217327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.850 [2024-06-09 09:13:41.217337] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.850 [2024-06-09 09:13:41.217584] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.850 [2024-06-09 09:13:41.217807] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.850 [2024-06-09 09:13:41.217815] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.850 [2024-06-09 09:13:41.217823] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.850 [2024-06-09 09:13:41.221366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.850 [2024-06-09 09:13:41.230362] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.850 [2024-06-09 09:13:41.231186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.850 [2024-06-09 09:13:41.231223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.850 [2024-06-09 09:13:41.231234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.850 [2024-06-09 09:13:41.231481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.850 [2024-06-09 09:13:41.231704] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.850 [2024-06-09 09:13:41.231714] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.850 [2024-06-09 09:13:41.231721] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.850 [2024-06-09 09:13:41.235270] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.850 [2024-06-09 09:13:41.244264] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.850 [2024-06-09 09:13:41.245067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.850 [2024-06-09 09:13:41.245104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.850 [2024-06-09 09:13:41.245115] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.850 [2024-06-09 09:13:41.245353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.850 [2024-06-09 09:13:41.245584] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.850 [2024-06-09 09:13:41.245593] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.850 [2024-06-09 09:13:41.245601] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.850 [2024-06-09 09:13:41.249145] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.850 [2024-06-09 09:13:41.258128] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.850 [2024-06-09 09:13:41.258934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.850 [2024-06-09 09:13:41.258971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.850 [2024-06-09 09:13:41.258982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.850 [2024-06-09 09:13:41.259220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.850 [2024-06-09 09:13:41.259451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.850 [2024-06-09 09:13:41.259461] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.850 [2024-06-09 09:13:41.259468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.850 [2024-06-09 09:13:41.263011] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.850 [2024-06-09 09:13:41.271988] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.850 [2024-06-09 09:13:41.272741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.850 [2024-06-09 09:13:41.272777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.850 [2024-06-09 09:13:41.272788] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.850 [2024-06-09 09:13:41.273030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.850 [2024-06-09 09:13:41.273253] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.850 [2024-06-09 09:13:41.273261] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.850 [2024-06-09 09:13:41.273269] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.850 [2024-06-09 09:13:41.276821] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.850 [2024-06-09 09:13:41.285805] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.850 [2024-06-09 09:13:41.286522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.850 [2024-06-09 09:13:41.286559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.850 [2024-06-09 09:13:41.286570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.850 [2024-06-09 09:13:41.286807] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.850 [2024-06-09 09:13:41.287030] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.850 [2024-06-09 09:13:41.287038] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.850 [2024-06-09 09:13:41.287045] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.850 [2024-06-09 09:13:41.290606] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.850 [2024-06-09 09:13:41.299638] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.850 [2024-06-09 09:13:41.300296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.850 [2024-06-09 09:13:41.300333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.850 [2024-06-09 09:13:41.300343] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.850 [2024-06-09 09:13:41.300590] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.850 [2024-06-09 09:13:41.300813] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.850 [2024-06-09 09:13:41.300821] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.850 [2024-06-09 09:13:41.300829] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.850 [2024-06-09 09:13:41.304382] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.850 [2024-06-09 09:13:41.313567] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.850 [2024-06-09 09:13:41.314326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.850 [2024-06-09 09:13:41.314364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.850 [2024-06-09 09:13:41.314374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.850 [2024-06-09 09:13:41.314621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.850 [2024-06-09 09:13:41.314844] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.850 [2024-06-09 09:13:41.314853] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.850 [2024-06-09 09:13:41.314864] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.850 [2024-06-09 09:13:41.318410] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.850 [2024-06-09 09:13:41.327388] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.850 [2024-06-09 09:13:41.328190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.850 [2024-06-09 09:13:41.328227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.850 [2024-06-09 09:13:41.328237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.850 [2024-06-09 09:13:41.328484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.850 [2024-06-09 09:13:41.328707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.850 [2024-06-09 09:13:41.328716] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.850 [2024-06-09 09:13:41.328723] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.850 [2024-06-09 09:13:41.332268] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.850 [2024-06-09 09:13:41.341249] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.850 [2024-06-09 09:13:41.342018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.850 [2024-06-09 09:13:41.342055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.850 [2024-06-09 09:13:41.342066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.850 [2024-06-09 09:13:41.342304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.850 [2024-06-09 09:13:41.342535] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.851 [2024-06-09 09:13:41.342544] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.851 [2024-06-09 09:13:41.342551] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.851 [2024-06-09 09:13:41.346095] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.851 [2024-06-09 09:13:41.355074] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.851 [2024-06-09 09:13:41.355896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.851 [2024-06-09 09:13:41.355933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.851 [2024-06-09 09:13:41.355944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.851 [2024-06-09 09:13:41.356182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.851 [2024-06-09 09:13:41.356413] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.851 [2024-06-09 09:13:41.356423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.851 [2024-06-09 09:13:41.356430] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.851 [2024-06-09 09:13:41.359975] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.851 [2024-06-09 09:13:41.368961] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.851 [2024-06-09 09:13:41.369630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.851 [2024-06-09 09:13:41.369668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.851 [2024-06-09 09:13:41.369678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.851 [2024-06-09 09:13:41.369916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.851 [2024-06-09 09:13:41.370138] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.851 [2024-06-09 09:13:41.370147] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.851 [2024-06-09 09:13:41.370154] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.851 [2024-06-09 09:13:41.373707] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.851 [2024-06-09 09:13:41.382894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.851 [2024-06-09 09:13:41.383711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.851 [2024-06-09 09:13:41.383748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.851 [2024-06-09 09:13:41.383758] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.851 [2024-06-09 09:13:41.383997] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.851 [2024-06-09 09:13:41.384219] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.851 [2024-06-09 09:13:41.384228] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.851 [2024-06-09 09:13:41.384235] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.851 [2024-06-09 09:13:41.387788] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.851 [2024-06-09 09:13:41.396772] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.851 [2024-06-09 09:13:41.397622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.851 [2024-06-09 09:13:41.397659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:18.851 [2024-06-09 09:13:41.397670] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:18.851 [2024-06-09 09:13:41.397907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:18.851 [2024-06-09 09:13:41.398129] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.851 [2024-06-09 09:13:41.398138] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.851 [2024-06-09 09:13:41.398145] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.851 [2024-06-09 09:13:41.401708] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:19.114 [2024-06-09 09:13:41.410701] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:19.114 [2024-06-09 09:13:41.411419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.114 [2024-06-09 09:13:41.411438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:19.114 [2024-06-09 09:13:41.411446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:19.114 [2024-06-09 09:13:41.411670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:19.114 [2024-06-09 09:13:41.411888] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:19.114 [2024-06-09 09:13:41.411896] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:19.114 [2024-06-09 09:13:41.411903] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:19.114 [2024-06-09 09:13:41.415449] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:19.114 [2024-06-09 09:13:41.424634] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:19.114 [2024-06-09 09:13:41.425430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.114 [2024-06-09 09:13:41.425467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:19.114 [2024-06-09 09:13:41.425479] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:19.114 [2024-06-09 09:13:41.425720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:19.114 [2024-06-09 09:13:41.425943] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:19.114 [2024-06-09 09:13:41.425952] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:19.114 [2024-06-09 09:13:41.425959] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:19.114 [2024-06-09 09:13:41.429511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:19.114 [2024-06-09 09:13:41.438488] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:19.114 [2024-06-09 09:13:41.439283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.114 [2024-06-09 09:13:41.439320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:19.114 [2024-06-09 09:13:41.439331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:19.114 [2024-06-09 09:13:41.439577] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:19.114 [2024-06-09 09:13:41.439801] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:19.114 [2024-06-09 09:13:41.439810] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:19.114 [2024-06-09 09:13:41.439817] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:19.114 [2024-06-09 09:13:41.443360] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:19.114 [2024-06-09 09:13:41.452362] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:19.114 [2024-06-09 09:13:41.453165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.114 [2024-06-09 09:13:41.453202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420
00:35:19.114 [2024-06-09 09:13:41.453212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set
00:35:19.114 [2024-06-09 09:13:41.453461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor
00:35:19.114 [2024-06-09 09:13:41.453684] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:19.114 [2024-06-09 09:13:41.453692] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:19.114 [2024-06-09 09:13:41.453704] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:19.114 [2024-06-09 09:13:41.457251] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:19.114 [2024-06-09 09:13:41.466246] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.114 [2024-06-09 09:13:41.467031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.114 [2024-06-09 09:13:41.467067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.114 [2024-06-09 09:13:41.467078] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.114 [2024-06-09 09:13:41.467316] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.114 [2024-06-09 09:13:41.467547] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.114 [2024-06-09 09:13:41.467556] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.114 [2024-06-09 09:13:41.467563] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.114 [2024-06-09 09:13:41.471112] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.115 [2024-06-09 09:13:41.480106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.115 [2024-06-09 09:13:41.480879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.115 [2024-06-09 09:13:41.480916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.115 [2024-06-09 09:13:41.480927] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.115 [2024-06-09 09:13:41.481165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.115 [2024-06-09 09:13:41.481388] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.115 [2024-06-09 09:13:41.481397] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.115 [2024-06-09 09:13:41.481414] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.115 [2024-06-09 09:13:41.484961] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.115 [2024-06-09 09:13:41.493960] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.115 [2024-06-09 09:13:41.494657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.115 [2024-06-09 09:13:41.494695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.115 [2024-06-09 09:13:41.494707] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.115 [2024-06-09 09:13:41.494946] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.115 [2024-06-09 09:13:41.495169] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.115 [2024-06-09 09:13:41.495178] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.115 [2024-06-09 09:13:41.495185] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.115 [2024-06-09 09:13:41.498747] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.115 [2024-06-09 09:13:41.507760] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.115 [2024-06-09 09:13:41.508374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.115 [2024-06-09 09:13:41.508396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.115 [2024-06-09 09:13:41.508411] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.115 [2024-06-09 09:13:41.508631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.115 [2024-06-09 09:13:41.508850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.115 [2024-06-09 09:13:41.508858] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.115 [2024-06-09 09:13:41.508864] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.115 [2024-06-09 09:13:41.512411] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.115 [2024-06-09 09:13:41.521610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.115 [2024-06-09 09:13:41.522384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.115 [2024-06-09 09:13:41.522429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.115 [2024-06-09 09:13:41.522439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.115 [2024-06-09 09:13:41.522678] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.115 [2024-06-09 09:13:41.522900] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.115 [2024-06-09 09:13:41.522909] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.115 [2024-06-09 09:13:41.522916] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.115 [2024-06-09 09:13:41.526474] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.115 [2024-06-09 09:13:41.535480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.115 [2024-06-09 09:13:41.536208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.115 [2024-06-09 09:13:41.536227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.115 [2024-06-09 09:13:41.536234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.115 [2024-06-09 09:13:41.536460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.115 [2024-06-09 09:13:41.536681] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.115 [2024-06-09 09:13:41.536688] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.115 [2024-06-09 09:13:41.536695] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.115 [2024-06-09 09:13:41.540240] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.115 [2024-06-09 09:13:41.549482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.115 [2024-06-09 09:13:41.550192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.115 [2024-06-09 09:13:41.550207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.115 [2024-06-09 09:13:41.550215] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.115 [2024-06-09 09:13:41.550440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.115 [2024-06-09 09:13:41.550664] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.115 [2024-06-09 09:13:41.550672] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.115 [2024-06-09 09:13:41.550679] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.115 [2024-06-09 09:13:41.554223] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.115 [2024-06-09 09:13:41.563435] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.115 [2024-06-09 09:13:41.564187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.115 [2024-06-09 09:13:41.564224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.115 [2024-06-09 09:13:41.564235] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.115 [2024-06-09 09:13:41.564481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.115 [2024-06-09 09:13:41.564704] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.115 [2024-06-09 09:13:41.564712] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.115 [2024-06-09 09:13:41.564720] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.115 [2024-06-09 09:13:41.568261] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.115 [2024-06-09 09:13:41.577248] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.115 [2024-06-09 09:13:41.578021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.115 [2024-06-09 09:13:41.578057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.115 [2024-06-09 09:13:41.578068] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.115 [2024-06-09 09:13:41.578306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.115 [2024-06-09 09:13:41.578536] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.115 [2024-06-09 09:13:41.578545] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.115 [2024-06-09 09:13:41.578553] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.115 [2024-06-09 09:13:41.582096] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.115 [2024-06-09 09:13:41.591084] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.115 [2024-06-09 09:13:41.591862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.115 [2024-06-09 09:13:41.591899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.115 [2024-06-09 09:13:41.591910] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.115 [2024-06-09 09:13:41.592148] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.115 [2024-06-09 09:13:41.592371] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.115 [2024-06-09 09:13:41.592379] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.115 [2024-06-09 09:13:41.592387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.115 [2024-06-09 09:13:41.595946] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.115 [2024-06-09 09:13:41.604948] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.115 [2024-06-09 09:13:41.605747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.116 [2024-06-09 09:13:41.605784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.116 [2024-06-09 09:13:41.605795] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.116 [2024-06-09 09:13:41.606034] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.116 [2024-06-09 09:13:41.606256] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.116 [2024-06-09 09:13:41.606264] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.116 [2024-06-09 09:13:41.606272] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.116 [2024-06-09 09:13:41.609825] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.116 [2024-06-09 09:13:41.618809] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.116 [2024-06-09 09:13:41.619508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.116 [2024-06-09 09:13:41.619545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.116 [2024-06-09 09:13:41.619557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.116 [2024-06-09 09:13:41.619799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.116 [2024-06-09 09:13:41.620021] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.116 [2024-06-09 09:13:41.620030] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.116 [2024-06-09 09:13:41.620037] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.116 [2024-06-09 09:13:41.623591] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.116 [2024-06-09 09:13:41.632781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.116 [2024-06-09 09:13:41.633659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.116 [2024-06-09 09:13:41.633696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.116 [2024-06-09 09:13:41.633708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.116 [2024-06-09 09:13:41.633949] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.116 [2024-06-09 09:13:41.634172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.116 [2024-06-09 09:13:41.634180] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.116 [2024-06-09 09:13:41.634188] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.116 [2024-06-09 09:13:41.637741] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.116 [2024-06-09 09:13:41.646729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.116 [2024-06-09 09:13:41.647452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.116 [2024-06-09 09:13:41.647477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.116 [2024-06-09 09:13:41.647490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.116 [2024-06-09 09:13:41.647715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.116 [2024-06-09 09:13:41.647935] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.116 [2024-06-09 09:13:41.647943] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.116 [2024-06-09 09:13:41.647950] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.116 [2024-06-09 09:13:41.651497] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.116 [2024-06-09 09:13:41.660683] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.116 [2024-06-09 09:13:41.661399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.116 [2024-06-09 09:13:41.661442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.116 [2024-06-09 09:13:41.661453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.116 [2024-06-09 09:13:41.661691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.116 [2024-06-09 09:13:41.661913] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.116 [2024-06-09 09:13:41.661922] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.116 [2024-06-09 09:13:41.661929] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.116 [2024-06-09 09:13:41.665483] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.379 [2024-06-09 09:13:41.674670] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.379 [2024-06-09 09:13:41.675408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.379 [2024-06-09 09:13:41.675426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.379 [2024-06-09 09:13:41.675434] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.379 [2024-06-09 09:13:41.675654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.379 [2024-06-09 09:13:41.675872] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.379 [2024-06-09 09:13:41.675880] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.379 [2024-06-09 09:13:41.675887] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.379 [2024-06-09 09:13:41.679431] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.379 [2024-06-09 09:13:41.688614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.379 [2024-06-09 09:13:41.689187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.379 [2024-06-09 09:13:41.689224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.379 [2024-06-09 09:13:41.689236] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.379 [2024-06-09 09:13:41.689488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.379 [2024-06-09 09:13:41.689712] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.379 [2024-06-09 09:13:41.689726] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.379 [2024-06-09 09:13:41.689733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.379 [2024-06-09 09:13:41.693275] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.379 [2024-06-09 09:13:41.702479] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.379 [2024-06-09 09:13:41.703312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.379 [2024-06-09 09:13:41.703349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.379 [2024-06-09 09:13:41.703361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.379 [2024-06-09 09:13:41.703610] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.379 [2024-06-09 09:13:41.703833] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.379 [2024-06-09 09:13:41.703842] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.379 [2024-06-09 09:13:41.703849] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.379 [2024-06-09 09:13:41.707400] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.379 [2024-06-09 09:13:41.716394] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.379 [2024-06-09 09:13:41.717112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.379 [2024-06-09 09:13:41.717130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.379 [2024-06-09 09:13:41.717137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.379 [2024-06-09 09:13:41.717356] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.379 [2024-06-09 09:13:41.717581] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.379 [2024-06-09 09:13:41.717589] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.379 [2024-06-09 09:13:41.717596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.379 [2024-06-09 09:13:41.721137] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.379 [2024-06-09 09:13:41.730322] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.379 [2024-06-09 09:13:41.731124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.379 [2024-06-09 09:13:41.731161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.379 [2024-06-09 09:13:41.731171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.379 [2024-06-09 09:13:41.731416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.379 [2024-06-09 09:13:41.731640] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.379 [2024-06-09 09:13:41.731649] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.379 [2024-06-09 09:13:41.731656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.379 [2024-06-09 09:13:41.735198] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.379 [2024-06-09 09:13:41.744187] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.379 [2024-06-09 09:13:41.744964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.379 [2024-06-09 09:13:41.745002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.379 [2024-06-09 09:13:41.745014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.380 [2024-06-09 09:13:41.745253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.380 [2024-06-09 09:13:41.745482] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.380 [2024-06-09 09:13:41.745493] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.380 [2024-06-09 09:13:41.745500] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.380 [2024-06-09 09:13:41.749045] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.380 [2024-06-09 09:13:41.758035] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.380 [2024-06-09 09:13:41.758900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.380 [2024-06-09 09:13:41.758937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.380 [2024-06-09 09:13:41.758949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.380 [2024-06-09 09:13:41.759188] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.380 [2024-06-09 09:13:41.759417] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.380 [2024-06-09 09:13:41.759426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.380 [2024-06-09 09:13:41.759434] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.380 [2024-06-09 09:13:41.762976] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.380 [2024-06-09 09:13:41.771963] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.380 [2024-06-09 09:13:41.772694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.380 [2024-06-09 09:13:41.772731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.380 [2024-06-09 09:13:41.772743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.380 [2024-06-09 09:13:41.772982] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.380 [2024-06-09 09:13:41.773204] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.380 [2024-06-09 09:13:41.773213] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.380 [2024-06-09 09:13:41.773220] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.380 [2024-06-09 09:13:41.776772] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.380 [2024-06-09 09:13:41.785760] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.380 [2024-06-09 09:13:41.786509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.380 [2024-06-09 09:13:41.786547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.380 [2024-06-09 09:13:41.786559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.380 [2024-06-09 09:13:41.786805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.380 [2024-06-09 09:13:41.787028] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.380 [2024-06-09 09:13:41.787037] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.380 [2024-06-09 09:13:41.787044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.380 [2024-06-09 09:13:41.790597] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.380 [2024-06-09 09:13:41.799591] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.380 [2024-06-09 09:13:41.800273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.380 [2024-06-09 09:13:41.800310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.380 [2024-06-09 09:13:41.800320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.380 [2024-06-09 09:13:41.800566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.380 [2024-06-09 09:13:41.800790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.380 [2024-06-09 09:13:41.800798] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.380 [2024-06-09 09:13:41.800806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.380 [2024-06-09 09:13:41.804362] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.380 [2024-06-09 09:13:41.813566] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.380 [2024-06-09 09:13:41.814231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.380 [2024-06-09 09:13:41.814269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.380 [2024-06-09 09:13:41.814279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.380 [2024-06-09 09:13:41.814524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.380 [2024-06-09 09:13:41.814747] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.380 [2024-06-09 09:13:41.814756] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.380 [2024-06-09 09:13:41.814763] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.380 [2024-06-09 09:13:41.818304] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.380 [2024-06-09 09:13:41.827506] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.380 [2024-06-09 09:13:41.828197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.380 [2024-06-09 09:13:41.828215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.380 [2024-06-09 09:13:41.828223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.380 [2024-06-09 09:13:41.828447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.380 [2024-06-09 09:13:41.828667] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.380 [2024-06-09 09:13:41.828674] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.380 [2024-06-09 09:13:41.828685] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.380 [2024-06-09 09:13:41.832227] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.380 [2024-06-09 09:13:41.841418] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.380 [2024-06-09 09:13:41.842041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.380 [2024-06-09 09:13:41.842057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.380 [2024-06-09 09:13:41.842064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.380 [2024-06-09 09:13:41.842283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.380 [2024-06-09 09:13:41.842506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.380 [2024-06-09 09:13:41.842514] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.380 [2024-06-09 09:13:41.842521] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.380 [2024-06-09 09:13:41.846060] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.380 [2024-06-09 09:13:41.855254] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.380 [2024-06-09 09:13:41.856021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.380 [2024-06-09 09:13:41.856058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.380 [2024-06-09 09:13:41.856068] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.380 [2024-06-09 09:13:41.856307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.380 [2024-06-09 09:13:41.856537] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.380 [2024-06-09 09:13:41.856547] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.380 [2024-06-09 09:13:41.856554] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.380 [2024-06-09 09:13:41.860097] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.380 [2024-06-09 09:13:41.869088] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.380 [2024-06-09 09:13:41.869717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.380 [2024-06-09 09:13:41.869754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.380 [2024-06-09 09:13:41.869765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.380 [2024-06-09 09:13:41.870004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.380 [2024-06-09 09:13:41.870226] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.380 [2024-06-09 09:13:41.870235] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.380 [2024-06-09 09:13:41.870243] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.380 [2024-06-09 09:13:41.873876] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.380 [2024-06-09 09:13:41.883080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.380 [2024-06-09 09:13:41.883882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.380 [2024-06-09 09:13:41.883919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.380 [2024-06-09 09:13:41.883930] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.380 [2024-06-09 09:13:41.884167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.380 [2024-06-09 09:13:41.884389] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.381 [2024-06-09 09:13:41.884398] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.381 [2024-06-09 09:13:41.884412] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.381 [2024-06-09 09:13:41.887962] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.381 [2024-06-09 09:13:41.896946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.381 [2024-06-09 09:13:41.897658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.381 [2024-06-09 09:13:41.897696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.381 [2024-06-09 09:13:41.897706] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.381 [2024-06-09 09:13:41.897945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.381 [2024-06-09 09:13:41.898167] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.381 [2024-06-09 09:13:41.898175] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.381 [2024-06-09 09:13:41.898182] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.381 [2024-06-09 09:13:41.901732] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.381 [2024-06-09 09:13:41.910934] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.381 [2024-06-09 09:13:41.911683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.381 [2024-06-09 09:13:41.911721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.381 [2024-06-09 09:13:41.911733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.381 [2024-06-09 09:13:41.911974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.381 [2024-06-09 09:13:41.912197] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.381 [2024-06-09 09:13:41.912205] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.381 [2024-06-09 09:13:41.912212] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.381 [2024-06-09 09:13:41.915768] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.381 [2024-06-09 09:13:41.924757] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.381 [2024-06-09 09:13:41.925511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.381 [2024-06-09 09:13:41.925549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.381 [2024-06-09 09:13:41.925561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.381 [2024-06-09 09:13:41.925803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.381 [2024-06-09 09:13:41.926030] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.381 [2024-06-09 09:13:41.926039] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.381 [2024-06-09 09:13:41.926046] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.381 [2024-06-09 09:13:41.929598] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.644 [2024-06-09 09:13:41.938587] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.645 [2024-06-09 09:13:41.939045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.645 [2024-06-09 09:13:41.939068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.645 [2024-06-09 09:13:41.939076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.645 [2024-06-09 09:13:41.939298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.645 [2024-06-09 09:13:41.939525] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.645 [2024-06-09 09:13:41.939534] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.645 [2024-06-09 09:13:41.939541] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.645 [2024-06-09 09:13:41.943083] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.645 [2024-06-09 09:13:41.952488] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.645 [2024-06-09 09:13:41.953100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.645 [2024-06-09 09:13:41.953115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.645 [2024-06-09 09:13:41.953122] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.645 [2024-06-09 09:13:41.953341] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.645 [2024-06-09 09:13:41.953565] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.645 [2024-06-09 09:13:41.953573] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.645 [2024-06-09 09:13:41.953580] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2839263 Killed "${NVMF_APP[@]}" "$@" 00:35:19.645 09:13:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:19.645 [2024-06-09 09:13:41.957121] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.645 09:13:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:19.645 09:13:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:19.645 09:13:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:19.645 09:13:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:19.645 09:13:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2840851 00:35:19.645 09:13:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2840851 00:35:19.645 09:13:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:19.645 [2024-06-09 09:13:41.966313] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.645 09:13:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 2840851 ']' 00:35:19.645 09:13:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.645 09:13:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:19.645 [2024-06-09 09:13:41.967076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.645 [2024-06-09 09:13:41.967113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.645 [2024-06-09 09:13:41.967124] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.645 09:13:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:19.645 [2024-06-09 09:13:41.967363] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.645 09:13:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:19.645 [2024-06-09 09:13:41.967595] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.645 [2024-06-09 09:13:41.967606] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.645 [2024-06-09 09:13:41.967614] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.645 09:13:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:19.645 [2024-06-09 09:13:41.971159] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:19.645 [2024-06-09 09:13:41.980158] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.645 [2024-06-09 09:13:41.980945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.645 [2024-06-09 09:13:41.980982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.645 [2024-06-09 09:13:41.980993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.645 [2024-06-09 09:13:41.981230] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.645 [2024-06-09 09:13:41.981460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.645 [2024-06-09 09:13:41.981470] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:35:19.645 [2024-06-09 09:13:41.981477] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.645 [2024-06-09 09:13:41.985021] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:19.645 [2024-06-09 09:13:41.994016] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.645 [2024-06-09 09:13:41.994739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.645 [2024-06-09 09:13:41.994758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.645 [2024-06-09 09:13:41.994766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.645 [2024-06-09 09:13:41.994985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.645 [2024-06-09 09:13:41.995204] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.645 [2024-06-09 09:13:41.995211] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.645 [2024-06-09 09:13:41.995218] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.645 [2024-06-09 09:13:41.998769] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.645 [2024-06-09 09:13:42.008003] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.645 [2024-06-09 09:13:42.008762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.645 [2024-06-09 09:13:42.008799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.645 [2024-06-09 09:13:42.008809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.645 [2024-06-09 09:13:42.009047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.645 [2024-06-09 09:13:42.009270] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.645 [2024-06-09 09:13:42.009279] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.645 [2024-06-09 09:13:42.009286] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.645 [2024-06-09 09:13:42.012840] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:19.645 [2024-06-09 09:13:42.012833] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:35:19.645 [2024-06-09 09:13:42.012869] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:19.645 [2024-06-09 09:13:42.021832] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.645 [2024-06-09 09:13:42.022689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.645 [2024-06-09 09:13:42.022727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.645 [2024-06-09 09:13:42.022738] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.645 [2024-06-09 09:13:42.022975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.645 [2024-06-09 09:13:42.023198] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.645 [2024-06-09 09:13:42.023207] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.645 [2024-06-09 09:13:42.023214] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.645 [2024-06-09 09:13:42.026766] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.645 [2024-06-09 09:13:42.035759] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.645 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.645 [2024-06-09 09:13:42.036571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.645 [2024-06-09 09:13:42.036608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.645 [2024-06-09 09:13:42.036619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.645 [2024-06-09 09:13:42.036858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.646 [2024-06-09 09:13:42.037080] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.646 [2024-06-09 09:13:42.037089] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.646 [2024-06-09 09:13:42.037097] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.646 [2024-06-09 09:13:42.040650] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.646 [2024-06-09 09:13:42.049649] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.646 [2024-06-09 09:13:42.050347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.646 [2024-06-09 09:13:42.050365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.646 [2024-06-09 09:13:42.050373] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.646 [2024-06-09 09:13:42.050597] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.646 [2024-06-09 09:13:42.050816] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.646 [2024-06-09 09:13:42.050825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.646 [2024-06-09 09:13:42.050832] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.646 [2024-06-09 09:13:42.054368] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.646 [2024-06-09 09:13:42.063564] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.646 [2024-06-09 09:13:42.064295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.646 [2024-06-09 09:13:42.064311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.646 [2024-06-09 09:13:42.064319] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.646 [2024-06-09 09:13:42.064542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.646 [2024-06-09 09:13:42.064762] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.646 [2024-06-09 09:13:42.064770] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.646 [2024-06-09 09:13:42.064777] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.646 [2024-06-09 09:13:42.068344] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.646 [2024-06-09 09:13:42.077539] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.646 [2024-06-09 09:13:42.078263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.646 [2024-06-09 09:13:42.078278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.646 [2024-06-09 09:13:42.078286] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.646 [2024-06-09 09:13:42.078509] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.646 [2024-06-09 09:13:42.078728] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.646 [2024-06-09 09:13:42.078735] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.646 [2024-06-09 09:13:42.078742] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.646 [2024-06-09 09:13:42.082276] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.646 [2024-06-09 09:13:42.086122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:19.646 [2024-06-09 09:13:42.091568] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.646 [2024-06-09 09:13:42.092060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.646 [2024-06-09 09:13:42.092077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.646 [2024-06-09 09:13:42.092088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.646 [2024-06-09 09:13:42.092307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.646 [2024-06-09 09:13:42.092533] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.646 [2024-06-09 09:13:42.092543] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.646 [2024-06-09 09:13:42.092549] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.646 [2024-06-09 09:13:42.096088] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.646 [2024-06-09 09:13:42.105502] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.646 [2024-06-09 09:13:42.106070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.646 [2024-06-09 09:13:42.106086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.646 [2024-06-09 09:13:42.106094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.646 [2024-06-09 09:13:42.106313] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.646 [2024-06-09 09:13:42.106536] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.646 [2024-06-09 09:13:42.106545] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.646 [2024-06-09 09:13:42.106552] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.646 [2024-06-09 09:13:42.110093] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.646 [2024-06-09 09:13:42.119293] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.646 [2024-06-09 09:13:42.120120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.646 [2024-06-09 09:13:42.120162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.646 [2024-06-09 09:13:42.120173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.646 [2024-06-09 09:13:42.120429] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.646 [2024-06-09 09:13:42.120653] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.646 [2024-06-09 09:13:42.120662] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.646 [2024-06-09 09:13:42.120670] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.646 [2024-06-09 09:13:42.124216] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.646 [2024-06-09 09:13:42.133209] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.646 [2024-06-09 09:13:42.133942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.646 [2024-06-09 09:13:42.133961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.646 [2024-06-09 09:13:42.133969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.646 [2024-06-09 09:13:42.134188] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.646 [2024-06-09 09:13:42.134412] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.646 [2024-06-09 09:13:42.134427] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.646 [2024-06-09 09:13:42.134434] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.646 [2024-06-09 09:13:42.137972] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:19.646 [2024-06-09 09:13:42.139874] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:19.646 [2024-06-09 09:13:42.139899] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:19.646 [2024-06-09 09:13:42.139905] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:19.646 [2024-06-09 09:13:42.139911] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:35:19.646 [2024-06-09 09:13:42.139915] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:19.646 [2024-06-09 09:13:42.139952] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:19.646 [2024-06-09 09:13:42.140073] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.646 [2024-06-09 09:13:42.140075] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:35:19.646 [2024-06-09 09:13:42.147185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.646 [2024-06-09 09:13:42.147945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.646 [2024-06-09 09:13:42.147963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.646 [2024-06-09 09:13:42.147971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.646 [2024-06-09 09:13:42.148190] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.646 [2024-06-09 09:13:42.148415] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.646 [2024-06-09 09:13:42.148423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.646 [2024-06-09 09:13:42.148431] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.646 [2024-06-09 09:13:42.151973] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.646 [2024-06-09 09:13:42.161169] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.647 [2024-06-09 09:13:42.161960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.647 [2024-06-09 09:13:42.162000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.647 [2024-06-09 09:13:42.162012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.647 [2024-06-09 09:13:42.162257] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.647 [2024-06-09 09:13:42.162489] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.647 [2024-06-09 09:13:42.162499] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.647 [2024-06-09 09:13:42.162507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.647 [2024-06-09 09:13:42.166053] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.647 [2024-06-09 09:13:42.175046] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.647 [2024-06-09 09:13:42.175727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.647 [2024-06-09 09:13:42.175767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.647 [2024-06-09 09:13:42.175784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.647 [2024-06-09 09:13:42.176026] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.647 [2024-06-09 09:13:42.176249] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.647 [2024-06-09 09:13:42.176258] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.647 [2024-06-09 09:13:42.176266] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.647 [2024-06-09 09:13:42.179817] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.647 [2024-06-09 09:13:42.189013] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.647 [2024-06-09 09:13:42.189525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.647 [2024-06-09 09:13:42.189563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.647 [2024-06-09 09:13:42.189575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.647 [2024-06-09 09:13:42.189817] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.647 [2024-06-09 09:13:42.190039] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.647 [2024-06-09 09:13:42.190049] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.647 [2024-06-09 09:13:42.190056] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.647 [2024-06-09 09:13:42.193657] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.909 [2024-06-09 09:13:42.202873] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.909 [2024-06-09 09:13:42.203663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.909 [2024-06-09 09:13:42.203700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.909 [2024-06-09 09:13:42.203713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.909 [2024-06-09 09:13:42.203955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.909 [2024-06-09 09:13:42.204177] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.909 [2024-06-09 09:13:42.204186] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.909 [2024-06-09 09:13:42.204194] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.909 [2024-06-09 09:13:42.207747] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.909 [2024-06-09 09:13:42.216729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.909 [2024-06-09 09:13:42.217616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.909 [2024-06-09 09:13:42.217653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.909 [2024-06-09 09:13:42.217665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.909 [2024-06-09 09:13:42.217908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.909 [2024-06-09 09:13:42.218130] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.909 [2024-06-09 09:13:42.218143] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.909 [2024-06-09 09:13:42.218151] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.909 [2024-06-09 09:13:42.221701] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.909 [2024-06-09 09:13:42.230695] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.909 [2024-06-09 09:13:42.231502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.909 [2024-06-09 09:13:42.231540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.909 [2024-06-09 09:13:42.231552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.909 [2024-06-09 09:13:42.231791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.909 [2024-06-09 09:13:42.232014] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.909 [2024-06-09 09:13:42.232024] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.909 [2024-06-09 09:13:42.232031] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.909 [2024-06-09 09:13:42.235583] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.909 [2024-06-09 09:13:42.244570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.909 [2024-06-09 09:13:42.245283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.909 [2024-06-09 09:13:42.245301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.909 [2024-06-09 09:13:42.245309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.909 [2024-06-09 09:13:42.245533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.909 [2024-06-09 09:13:42.245753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.909 [2024-06-09 09:13:42.245761] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.909 [2024-06-09 09:13:42.245768] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.909 [2024-06-09 09:13:42.249304] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.909 [2024-06-09 09:13:42.258489] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.909 [2024-06-09 09:13:42.259270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.909 [2024-06-09 09:13:42.259306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.909 [2024-06-09 09:13:42.259317] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.909 [2024-06-09 09:13:42.259563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.909 [2024-06-09 09:13:42.259786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.909 [2024-06-09 09:13:42.259794] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.910 [2024-06-09 09:13:42.259802] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.910 [2024-06-09 09:13:42.263343] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.910 [2024-06-09 09:13:42.272339] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.910 [2024-06-09 09:13:42.273162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.910 [2024-06-09 09:13:42.273200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.910 [2024-06-09 09:13:42.273210] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.910 [2024-06-09 09:13:42.273456] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.910 [2024-06-09 09:13:42.273679] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.910 [2024-06-09 09:13:42.273687] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.910 [2024-06-09 09:13:42.273695] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.910 [2024-06-09 09:13:42.277237] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.910 [2024-06-09 09:13:42.286227] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.910 [2024-06-09 09:13:42.286967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.910 [2024-06-09 09:13:42.286985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.910 [2024-06-09 09:13:42.286993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.910 [2024-06-09 09:13:42.287212] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.910 [2024-06-09 09:13:42.287437] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.910 [2024-06-09 09:13:42.287445] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.910 [2024-06-09 09:13:42.287452] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.910 [2024-06-09 09:13:42.291003] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.910 [2024-06-09 09:13:42.300189] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.910 [2024-06-09 09:13:42.300997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.910 [2024-06-09 09:13:42.301034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.910 [2024-06-09 09:13:42.301045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.910 [2024-06-09 09:13:42.301283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.910 [2024-06-09 09:13:42.301512] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.910 [2024-06-09 09:13:42.301521] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.910 [2024-06-09 09:13:42.301528] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.910 [2024-06-09 09:13:42.305085] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.910 [2024-06-09 09:13:42.314074] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.910 [2024-06-09 09:13:42.314909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.910 [2024-06-09 09:13:42.314946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.910 [2024-06-09 09:13:42.314956] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.910 [2024-06-09 09:13:42.315198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.910 [2024-06-09 09:13:42.315428] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.910 [2024-06-09 09:13:42.315437] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.910 [2024-06-09 09:13:42.315445] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.910 [2024-06-09 09:13:42.318989] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.910 [2024-06-09 09:13:42.327980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.910 [2024-06-09 09:13:42.328740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.910 [2024-06-09 09:13:42.328777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.910 [2024-06-09 09:13:42.328788] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.910 [2024-06-09 09:13:42.329026] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.910 [2024-06-09 09:13:42.329248] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.910 [2024-06-09 09:13:42.329256] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.910 [2024-06-09 09:13:42.329263] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.910 [2024-06-09 09:13:42.332819] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.910 [2024-06-09 09:13:42.341811] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.910 [2024-06-09 09:13:42.342656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.910 [2024-06-09 09:13:42.342694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.910 [2024-06-09 09:13:42.342704] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.910 [2024-06-09 09:13:42.342943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.910 [2024-06-09 09:13:42.343166] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.910 [2024-06-09 09:13:42.343174] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.910 [2024-06-09 09:13:42.343181] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.910 [2024-06-09 09:13:42.346732] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.910 [2024-06-09 09:13:42.355719] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.910 [2024-06-09 09:13:42.356487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.910 [2024-06-09 09:13:42.356524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.910 [2024-06-09 09:13:42.356536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.910 [2024-06-09 09:13:42.356776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.910 [2024-06-09 09:13:42.356999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.910 [2024-06-09 09:13:42.357007] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.910 [2024-06-09 09:13:42.357019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.910 [2024-06-09 09:13:42.360572] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.910 [2024-06-09 09:13:42.369556] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.910 [2024-06-09 09:13:42.370390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.910 [2024-06-09 09:13:42.370433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.910 [2024-06-09 09:13:42.370444] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.910 [2024-06-09 09:13:42.370682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.910 [2024-06-09 09:13:42.370905] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.910 [2024-06-09 09:13:42.370913] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.910 [2024-06-09 09:13:42.370921] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.910 [2024-06-09 09:13:42.374468] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.910 [2024-06-09 09:13:42.383455] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.910 [2024-06-09 09:13:42.383813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.910 [2024-06-09 09:13:42.383833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.910 [2024-06-09 09:13:42.383840] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.910 [2024-06-09 09:13:42.384060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.910 [2024-06-09 09:13:42.384279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.910 [2024-06-09 09:13:42.384287] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.910 [2024-06-09 09:13:42.384293] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.910 [2024-06-09 09:13:42.387839] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.910 [2024-06-09 09:13:42.397239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.910 [2024-06-09 09:13:42.398036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.910 [2024-06-09 09:13:42.398073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.910 [2024-06-09 09:13:42.398084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.910 [2024-06-09 09:13:42.398322] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.910 [2024-06-09 09:13:42.398551] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.910 [2024-06-09 09:13:42.398560] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.910 [2024-06-09 09:13:42.398567] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.910 [2024-06-09 09:13:42.402111] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.911 [2024-06-09 09:13:42.411109] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.911 [2024-06-09 09:13:42.411653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.911 [2024-06-09 09:13:42.411690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.911 [2024-06-09 09:13:42.411701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.911 [2024-06-09 09:13:42.411939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.911 [2024-06-09 09:13:42.412161] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.911 [2024-06-09 09:13:42.412170] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.911 [2024-06-09 09:13:42.412177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.911 [2024-06-09 09:13:42.415726] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.911 [2024-06-09 09:13:42.424926] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.911 [2024-06-09 09:13:42.425253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.911 [2024-06-09 09:13:42.425271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.911 [2024-06-09 09:13:42.425279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.911 [2024-06-09 09:13:42.425505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.911 [2024-06-09 09:13:42.425725] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.911 [2024-06-09 09:13:42.425732] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.911 [2024-06-09 09:13:42.425739] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.911 [2024-06-09 09:13:42.429276] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.911 [2024-06-09 09:13:42.438892] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.911 [2024-06-09 09:13:42.439724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.911 [2024-06-09 09:13:42.439761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.911 [2024-06-09 09:13:42.439772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.911 [2024-06-09 09:13:42.440010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.911 [2024-06-09 09:13:42.440232] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.911 [2024-06-09 09:13:42.440241] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.911 [2024-06-09 09:13:42.440249] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.911 [2024-06-09 09:13:42.443801] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:19.911 [2024-06-09 09:13:42.452785] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:19.911 [2024-06-09 09:13:42.453483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.911 [2024-06-09 09:13:42.453501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:19.911 [2024-06-09 09:13:42.453509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:19.911 [2024-06-09 09:13:42.453733] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:19.911 [2024-06-09 09:13:42.453952] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.911 [2024-06-09 09:13:42.453960] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:19.911 [2024-06-09 09:13:42.453967] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.911 [2024-06-09 09:13:42.457510] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.173 [2024-06-09 09:13:42.466694] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.173 [2024-06-09 09:13:42.467454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.173 [2024-06-09 09:13:42.467492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.173 [2024-06-09 09:13:42.467504] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.173 [2024-06-09 09:13:42.467745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.173 [2024-06-09 09:13:42.467967] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.173 [2024-06-09 09:13:42.467976] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.173 [2024-06-09 09:13:42.467983] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.173 [2024-06-09 09:13:42.471537] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.173 [2024-06-09 09:13:42.480519] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.173 [2024-06-09 09:13:42.481257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.173 [2024-06-09 09:13:42.481275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.173 [2024-06-09 09:13:42.481283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.173 [2024-06-09 09:13:42.481509] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.173 [2024-06-09 09:13:42.481729] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.173 [2024-06-09 09:13:42.481736] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.174 [2024-06-09 09:13:42.481743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.174 [2024-06-09 09:13:42.485282] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.174 [2024-06-09 09:13:42.494478] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.174 [2024-06-09 09:13:42.495242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.174 [2024-06-09 09:13:42.495279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.174 [2024-06-09 09:13:42.495290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.174 [2024-06-09 09:13:42.495535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.174 [2024-06-09 09:13:42.495758] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.174 [2024-06-09 09:13:42.495767] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.174 [2024-06-09 09:13:42.495779] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.174 [2024-06-09 09:13:42.499323] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.174 [2024-06-09 09:13:42.508324] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.174 [2024-06-09 09:13:42.509166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.174 [2024-06-09 09:13:42.509203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.174 [2024-06-09 09:13:42.509213] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.174 [2024-06-09 09:13:42.509458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.174 [2024-06-09 09:13:42.509681] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.174 [2024-06-09 09:13:42.509689] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.174 [2024-06-09 09:13:42.509697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.174 [2024-06-09 09:13:42.513240] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.174 [2024-06-09 09:13:42.522224] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.174 [2024-06-09 09:13:42.522964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.174 [2024-06-09 09:13:42.523001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.174 [2024-06-09 09:13:42.523012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.174 [2024-06-09 09:13:42.523250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.174 [2024-06-09 09:13:42.523479] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.174 [2024-06-09 09:13:42.523489] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.174 [2024-06-09 09:13:42.523496] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.174 [2024-06-09 09:13:42.527040] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.174 [2024-06-09 09:13:42.536027] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.174 [2024-06-09 09:13:42.536534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.174 [2024-06-09 09:13:42.536580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.174 [2024-06-09 09:13:42.536592] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.174 [2024-06-09 09:13:42.536834] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.174 [2024-06-09 09:13:42.537056] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.174 [2024-06-09 09:13:42.537064] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.174 [2024-06-09 09:13:42.537071] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.174 [2024-06-09 09:13:42.540629] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.174 [2024-06-09 09:13:42.549827] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.174 [2024-06-09 09:13:42.550678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.174 [2024-06-09 09:13:42.550720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.174 [2024-06-09 09:13:42.550731] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.174 [2024-06-09 09:13:42.550969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.174 [2024-06-09 09:13:42.551192] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.174 [2024-06-09 09:13:42.551201] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.174 [2024-06-09 09:13:42.551209] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.174 [2024-06-09 09:13:42.554760] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.174 [2024-06-09 09:13:42.563751] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.174 [2024-06-09 09:13:42.564244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.174 [2024-06-09 09:13:42.564262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.174 [2024-06-09 09:13:42.564270] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.174 [2024-06-09 09:13:42.564495] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.174 [2024-06-09 09:13:42.564714] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.174 [2024-06-09 09:13:42.564721] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.174 [2024-06-09 09:13:42.564728] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.174 [2024-06-09 09:13:42.568269] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.174 [2024-06-09 09:13:42.577672] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.174 [2024-06-09 09:13:42.578247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.174 [2024-06-09 09:13:42.578262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.174 [2024-06-09 09:13:42.578269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.174 [2024-06-09 09:13:42.578492] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.174 [2024-06-09 09:13:42.578711] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.174 [2024-06-09 09:13:42.578719] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.174 [2024-06-09 09:13:42.578725] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.174 [2024-06-09 09:13:42.582264] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.174 [2024-06-09 09:13:42.591456] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.174 [2024-06-09 09:13:42.592248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.174 [2024-06-09 09:13:42.592286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.174 [2024-06-09 09:13:42.592296] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.174 [2024-06-09 09:13:42.592541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.174 [2024-06-09 09:13:42.592769] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.174 [2024-06-09 09:13:42.592778] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.174 [2024-06-09 09:13:42.592785] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.174 [2024-06-09 09:13:42.596330] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.174 [2024-06-09 09:13:42.605330] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.174 [2024-06-09 09:13:42.605986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.174 [2024-06-09 09:13:42.606023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.174 [2024-06-09 09:13:42.606033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.174 [2024-06-09 09:13:42.606272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.174 [2024-06-09 09:13:42.606501] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.174 [2024-06-09 09:13:42.606510] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.174 [2024-06-09 09:13:42.606517] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.174 [2024-06-09 09:13:42.610059] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.174 [2024-06-09 09:13:42.619259] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.174 [2024-06-09 09:13:42.620035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.174 [2024-06-09 09:13:42.620072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.174 [2024-06-09 09:13:42.620083] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.174 [2024-06-09 09:13:42.620321] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.174 [2024-06-09 09:13:42.620551] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.174 [2024-06-09 09:13:42.620562] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.174 [2024-06-09 09:13:42.620570] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.174 [2024-06-09 09:13:42.624116] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.175 [2024-06-09 09:13:42.633107] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.175 [2024-06-09 09:13:42.633843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.175 [2024-06-09 09:13:42.633880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.175 [2024-06-09 09:13:42.633891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.175 [2024-06-09 09:13:42.634129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.175 [2024-06-09 09:13:42.634351] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.175 [2024-06-09 09:13:42.634361] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.175 [2024-06-09 09:13:42.634368] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.175 [2024-06-09 09:13:42.637923] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.175 [2024-06-09 09:13:42.646919] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.175 [2024-06-09 09:13:42.647647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.175 [2024-06-09 09:13:42.647665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.175 [2024-06-09 09:13:42.647673] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.175 [2024-06-09 09:13:42.647892] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.175 [2024-06-09 09:13:42.648112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.175 [2024-06-09 09:13:42.648120] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.175 [2024-06-09 09:13:42.648126] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.175 [2024-06-09 09:13:42.651669] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.175 [2024-06-09 09:13:42.660862] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.175 [2024-06-09 09:13:42.661687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.175 [2024-06-09 09:13:42.661724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.175 [2024-06-09 09:13:42.661734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.175 [2024-06-09 09:13:42.661972] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.175 [2024-06-09 09:13:42.662194] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.175 [2024-06-09 09:13:42.662203] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.175 [2024-06-09 09:13:42.662210] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.175 [2024-06-09 09:13:42.665759] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.175 [2024-06-09 09:13:42.674748] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.175 [2024-06-09 09:13:42.675600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.175 [2024-06-09 09:13:42.675637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.175 [2024-06-09 09:13:42.675648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.175 [2024-06-09 09:13:42.675886] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.175 [2024-06-09 09:13:42.676108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.175 [2024-06-09 09:13:42.676117] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.175 [2024-06-09 09:13:42.676125] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.175 [2024-06-09 09:13:42.679680] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.175 [2024-06-09 09:13:42.688668] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.175 [2024-06-09 09:13:42.689505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.175 [2024-06-09 09:13:42.689542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.175 [2024-06-09 09:13:42.689557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.175 [2024-06-09 09:13:42.689795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.175 [2024-06-09 09:13:42.690018] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.175 [2024-06-09 09:13:42.690026] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.175 [2024-06-09 09:13:42.690034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.175 [2024-06-09 09:13:42.693586] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.175 [2024-06-09 09:13:42.702586] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.175 [2024-06-09 09:13:42.703454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.175 [2024-06-09 09:13:42.703492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.175 [2024-06-09 09:13:42.703504] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.175 [2024-06-09 09:13:42.703744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.175 [2024-06-09 09:13:42.703966] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.175 [2024-06-09 09:13:42.703975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.175 [2024-06-09 09:13:42.703982] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.175 [2024-06-09 09:13:42.707539] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.175 [2024-06-09 09:13:42.716530] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.175 [2024-06-09 09:13:42.717375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.175 [2024-06-09 09:13:42.717418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.175 [2024-06-09 09:13:42.717431] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.175 [2024-06-09 09:13:42.717670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.175 [2024-06-09 09:13:42.717893] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.175 [2024-06-09 09:13:42.717901] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.175 [2024-06-09 09:13:42.717910] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.175 [2024-06-09 09:13:42.721459] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.175 [2024-06-09 09:13:42.730456] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.437 [2024-06-09 09:13:42.731249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.437 [2024-06-09 09:13:42.731287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.437 [2024-06-09 09:13:42.731298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.437 [2024-06-09 09:13:42.731543] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.437 [2024-06-09 09:13:42.731767] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.437 [2024-06-09 09:13:42.731780] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.437 [2024-06-09 09:13:42.731788] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.437 [2024-06-09 09:13:42.735331] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.437 [2024-06-09 09:13:42.744319] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.437 [2024-06-09 09:13:42.745179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.437 [2024-06-09 09:13:42.745215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.437 [2024-06-09 09:13:42.745226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.437 [2024-06-09 09:13:42.745476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.437 [2024-06-09 09:13:42.745699] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.437 [2024-06-09 09:13:42.745708] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.437 [2024-06-09 09:13:42.745716] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.437 [2024-06-09 09:13:42.749260] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.437 [2024-06-09 09:13:42.758251] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.437 [2024-06-09 09:13:42.759112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.437 [2024-06-09 09:13:42.759150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.438 [2024-06-09 09:13:42.759161] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.438 [2024-06-09 09:13:42.759399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.438 [2024-06-09 09:13:42.759629] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.438 [2024-06-09 09:13:42.759639] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.438 [2024-06-09 09:13:42.759646] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.438 [2024-06-09 09:13:42.763192] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.438 [2024-06-09 09:13:42.772178] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.438 [2024-06-09 09:13:42.772831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.438 [2024-06-09 09:13:42.772868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.438 [2024-06-09 09:13:42.772879] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.438 [2024-06-09 09:13:42.773117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.438 [2024-06-09 09:13:42.773340] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.438 [2024-06-09 09:13:42.773349] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.438 [2024-06-09 09:13:42.773356] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.438 [2024-06-09 09:13:42.776911] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.438 [2024-06-09 09:13:42.786102] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.438 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:20.438 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:35:20.438 [2024-06-09 09:13:42.786957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.438 [2024-06-09 09:13:42.786995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.438 [2024-06-09 09:13:42.787007] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.438 09:13:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:20.438 [2024-06-09 09:13:42.787245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.438 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:20.438 [2024-06-09 09:13:42.787476] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.438 [2024-06-09 09:13:42.787486] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.438 [2024-06-09 09:13:42.787493] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.438 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:20.438 [2024-06-09 09:13:42.791037] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.438 [2024-06-09 09:13:42.800026] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.438 [2024-06-09 09:13:42.800841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.438 [2024-06-09 09:13:42.800878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.438 [2024-06-09 09:13:42.800889] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.438 [2024-06-09 09:13:42.801127] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.438 [2024-06-09 09:13:42.801350] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.438 [2024-06-09 09:13:42.801359] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.438 [2024-06-09 09:13:42.801366] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.438 [2024-06-09 09:13:42.804928] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.438 [2024-06-09 09:13:42.813918] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.438 [2024-06-09 09:13:42.814702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.438 [2024-06-09 09:13:42.814739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.438 [2024-06-09 09:13:42.814750] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.438 [2024-06-09 09:13:42.814987] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.438 [2024-06-09 09:13:42.815210] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.438 [2024-06-09 09:13:42.815218] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.438 [2024-06-09 09:13:42.815226] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.438 [2024-06-09 09:13:42.818778] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.438 09:13:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:20.438 09:13:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:20.438 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.438 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:20.438 [2024-06-09 09:13:42.827776] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.438 [2024-06-09 09:13:42.828718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.438 [2024-06-09 09:13:42.828756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.438 [2024-06-09 09:13:42.828766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.438 [2024-06-09 09:13:42.829004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.438 [2024-06-09 09:13:42.829227] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.438 [2024-06-09 09:13:42.829236] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.438 [2024-06-09 09:13:42.829243] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.438 [2024-06-09 09:13:42.830414] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:20.438 [2024-06-09 09:13:42.832792] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.438 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.438 09:13:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:20.438 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.438 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:20.438 [2024-06-09 09:13:42.841646] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.438 [2024-06-09 09:13:42.842449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.438 [2024-06-09 09:13:42.842486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.438 [2024-06-09 09:13:42.842498] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.438 [2024-06-09 09:13:42.842740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.438 [2024-06-09 09:13:42.842962] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.438 [2024-06-09 09:13:42.842971] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.438 [2024-06-09 09:13:42.842978] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.438 [2024-06-09 09:13:42.846531] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.438 [2024-06-09 09:13:42.855525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.438 [2024-06-09 09:13:42.856289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.438 [2024-06-09 09:13:42.856326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.438 [2024-06-09 09:13:42.856336] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.438 [2024-06-09 09:13:42.856582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.438 [2024-06-09 09:13:42.856805] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.438 [2024-06-09 09:13:42.856820] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.438 [2024-06-09 09:13:42.856827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.438 [2024-06-09 09:13:42.860377] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.438 Malloc0 00:35:20.438 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.438 09:13:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:20.438 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.438 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:20.438 [2024-06-09 09:13:42.869365] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.438 [2024-06-09 09:13:42.870203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.438 [2024-06-09 09:13:42.870240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.438 [2024-06-09 09:13:42.870250] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.438 [2024-06-09 09:13:42.870496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.438 [2024-06-09 09:13:42.870719] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.438 [2024-06-09 09:13:42.870730] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.438 [2024-06-09 09:13:42.870738] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.439 [2024-06-09 09:13:42.874282] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.439 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.439 09:13:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:20.439 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.439 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:20.439 [2024-06-09 09:13:42.883268] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.439 [2024-06-09 09:13:42.884079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.439 [2024-06-09 09:13:42.884116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95dc0 with addr=10.0.0.2, port=4420 00:35:20.439 [2024-06-09 09:13:42.884127] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95dc0 is same with the state(5) to be set 00:35:20.439 [2024-06-09 09:13:42.884364] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95dc0 (9): Bad file descriptor 00:35:20.439 [2024-06-09 09:13:42.884595] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:20.439 [2024-06-09 09:13:42.884605] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:20.439 [2024-06-09 09:13:42.884612] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:20.439 [2024-06-09 09:13:42.888160] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:20.439 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.439 09:13:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:20.439 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.439 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:20.439 [2024-06-09 09:13:42.896249] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:20.439 [2024-06-09 09:13:42.897148] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:20.439 09:13:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.439 09:13:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2839629 00:35:20.702 [2024-06-09 09:13:43.021037] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:35:28.909 00:35:28.909 Latency(us) 00:35:28.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.909 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:28.909 Verification LBA range: start 0x0 length 0x4000 00:35:28.909 Nvme1n1 : 15.00 8588.40 33.55 9820.79 0.00 6927.77 1365.33 20425.39 00:35:28.909 =================================================================================================================== 00:35:28.909 Total : 8588.40 33.55 9820.79 0.00 6927.77 1365.33 20425.39 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:29.170 rmmod nvme_tcp 00:35:29.170 rmmod nvme_fabrics 00:35:29.170 rmmod nvme_keyring 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@124 -- # set -e 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2840851 ']' 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2840851 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 2840851 ']' 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 2840851 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2840851 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2840851' 00:35:29.170 killing process with pid 2840851 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 2840851 00:35:29.170 09:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 2840851 00:35:29.432 09:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:29.432 09:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:29.432 09:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:29.432 09:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:29.432 09:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:29.432 09:13:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:29.432 09:13:51 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:29.432 09:13:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.345 09:13:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:31.346 00:35:31.346 real 0m27.567s 00:35:31.346 user 1m3.193s 00:35:31.346 sys 0m6.821s 00:35:31.346 09:13:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:31.346 09:13:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.346 ************************************ 00:35:31.346 END TEST nvmf_bdevperf 00:35:31.346 ************************************ 00:35:31.346 09:13:53 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:31.346 09:13:53 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:35:31.346 09:13:53 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:31.346 09:13:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:31.346 ************************************ 00:35:31.346 START TEST nvmf_target_disconnect 00:35:31.346 ************************************ 00:35:31.346 09:13:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:31.607 * Looking for test storage... 
00:35:31.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:31.607 09:13:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:31.608 09:13:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:35:31.608 09:13:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.608 09:13:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.608 09:13:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.608 09:13:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.608 09:13:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:31.608 09:13:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:31.608 09:13:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.608 09:13:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:31.608 09:13:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.608 09:13:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:31.608 09:13:54 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:35:31.608 09:13:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:38.202 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:35:38.202 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:38.203 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:38.203 09:14:00 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:38.203 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:38.203 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:38.203 09:14:00 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:38.203 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:38.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:38.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms 00:35:38.464 00:35:38.464 --- 10.0.0.2 ping statistics --- 00:35:38.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.464 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:38.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:38.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.410 ms 00:35:38.464 00:35:38.464 --- 10.0.0.1 ping statistics --- 00:35:38.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.464 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:38.464 09:14:00 
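The `ip`/`iptables` sequence above moves the target-side interface into its own network namespace so initiator and target traffic cross a real NIC pair. The interface and namespace names below (`cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk`) and the addresses are taken from the log; the helper itself is a hypothetical sketch for summarizing the steps, not part of SPDK's `nvmf/common.sh`, and actually applying the commands requires root.

```python
import subprocess

def build_netns_setup(target_if="cvl_0_0", init_if="cvl_0_1",
                      ns="cvl_0_0_ns_spdk",
                      target_ip="10.0.0.2", init_ip="10.0.0.1"):
    """Return the command sequence the log runs to isolate the target NIC."""
    return [
        f"ip -4 addr flush {target_if}",
        f"ip -4 addr flush {init_if}",
        f"ip netns add {ns}",
        f"ip link set {target_if} netns {ns}",
        f"ip addr add {init_ip}/24 dev {init_if}",
        f"ip netns exec {ns} ip addr add {target_ip}/24 dev {target_if}",
        f"ip link set {init_if} up",
        f"ip netns exec {ns} ip link set {target_if} up",
        f"ip netns exec {ns} ip link set lo up",
        # Accept NVMe/TCP traffic (port 4420) arriving on the initiator side.
        f"iptables -I INPUT 1 -i {init_if} -p tcp --dport 4420 -j ACCEPT",
    ]

def apply(commands, dry_run=True):
    # dry_run=True only prints the commands; running them for real needs root.
    for cmd in commands:
        if dry_run:
            print(cmd)
        else:
            subprocess.run(cmd.split(), check=True)

apply(build_netns_setup())
```

The two `ping -c 1` checks in the log (host to 10.0.0.2, then namespace back to 10.0.0.1) are what let `nvmf_tcp_init` return 0 before any NVMe traffic is attempted.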
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:38.464 ************************************ 00:35:38.464 START TEST nvmf_target_disconnect_tc1 00:35:38.464 ************************************ 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:38.464 09:14:00 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:38.464 09:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:38.465 09:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:38.465 EAL: No free 2048 kB hugepages reported on node 1 00:35:38.725 [2024-06-09 09:14:01.030390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.725 [2024-06-09 09:14:01.030449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x62f1d0 with addr=10.0.0.2, port=4420 00:35:38.725 [2024-06-09 09:14:01.030477] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:38.725 [2024-06-09 09:14:01.030492] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:38.725 [2024-06-09 09:14:01.030500] nvme.c: 
898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:35:38.725 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:38.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:38.725 Initializing NVMe Controllers 00:35:38.725 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:35:38.725 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:38.725 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:38.725 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:38.725 00:35:38.725 real 0m0.110s 00:35:38.725 user 0m0.047s 00:35:38.725 sys 0m0.061s 00:35:38.725 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:38.725 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:38.725 ************************************ 00:35:38.725 END TEST nvmf_target_disconnect_tc1 00:35:38.725 ************************************ 00:35:38.725 09:14:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:38.725 09:14:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:35:38.725 09:14:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:38.725 09:14:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:38.726 ************************************ 00:35:38.726 START TEST nvmf_target_disconnect_tc2 00:35:38.726 ************************************ 00:35:38.726 09:14:01 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:35:38.726 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:38.726 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:38.726 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:38.726 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:38.726 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:38.726 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2846673 00:35:38.726 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2846673 00:35:38.726 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:38.726 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2846673 ']' 00:35:38.726 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:38.726 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:38.726 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:38.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:38.726 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:38.726 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:38.726 [2024-06-09 09:14:01.180010] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:35:38.726 [2024-06-09 09:14:01.180057] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:38.726 EAL: No free 2048 kB hugepages reported on node 1 00:35:38.726 [2024-06-09 09:14:01.262752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:38.987 [2024-06-09 09:14:01.351280] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:38.987 [2024-06-09 09:14:01.351331] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:38.987 [2024-06-09 09:14:01.351340] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:38.987 [2024-06-09 09:14:01.351347] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:38.987 [2024-06-09 09:14:01.351353] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:38.987 [2024-06-09 09:14:01.351512] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:35:38.987 [2024-06-09 09:14:01.351778] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:35:38.987 [2024-06-09 09:14:01.352068] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:35:38.987 [2024-06-09 09:14:01.352072] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:35:39.557 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:39.557 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:35:39.557 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:39.557 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:39.557 09:14:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:39.557 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:39.557 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:39.557 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:39.557 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:39.557 Malloc0 00:35:39.557 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:39.557 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:35:39.557 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:39.558 [2024-06-09 09:14:02.041731] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:39.558 [2024-06-09 09:14:02.081991] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2847023 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:39.558 09:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:39.818 EAL: No free 2048 kB hugepages reported on node 1 00:35:41.739 09:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2846673 00:35:41.739 09:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error 
(sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Write completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Write completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Write completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Write completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Write completed with error (sct=0, sc=8) 
00:35:41.739 starting I/O failed 00:35:41.739 Write completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Write completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Read completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Write completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Write completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 Write completed with error (sct=0, sc=8) 00:35:41.739 starting I/O failed 00:35:41.739 [2024-06-09 09:14:04.113771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:41.739 [2024-06-09 09:14:04.114306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.739 [2024-06-09 09:14:04.114324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.739 qpair failed and we were unable to recover it. 00:35:41.739 [2024-06-09 09:14:04.114929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.739 [2024-06-09 09:14:04.114966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.739 qpair failed and we were unable to recover it. 00:35:41.739 [2024-06-09 09:14:04.115307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.739 [2024-06-09 09:14:04.115319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.739 qpair failed and we were unable to recover it. 
00:35:41.739 [2024-06-09 09:14:04.115879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.739 [2024-06-09 09:14:04.115916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.739 qpair failed and we were unable to recover it. 00:35:41.739 [2024-06-09 09:14:04.116357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.739 [2024-06-09 09:14:04.116369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.739 qpair failed and we were unable to recover it. 00:35:41.739 [2024-06-09 09:14:04.116866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.739 [2024-06-09 09:14:04.116903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.739 qpair failed and we were unable to recover it. 00:35:41.739 [2024-06-09 09:14:04.117277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.739 [2024-06-09 09:14:04.117294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.739 qpair failed and we were unable to recover it. 00:35:41.739 [2024-06-09 09:14:04.117875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.739 [2024-06-09 09:14:04.117911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.739 qpair failed and we were unable to recover it. 
00:35:41.739 [2024-06-09 09:14:04.118406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.739 [2024-06-09 09:14:04.118419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.739 qpair failed and we were unable to recover it. 00:35:41.739 [2024-06-09 09:14:04.119039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.739 [2024-06-09 09:14:04.119076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.739 qpair failed and we were unable to recover it. 00:35:41.739 [2024-06-09 09:14:04.119637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.739 [2024-06-09 09:14:04.119673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.739 qpair failed and we were unable to recover it. 00:35:41.739 [2024-06-09 09:14:04.120172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.739 [2024-06-09 09:14:04.120185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.739 qpair failed and we were unable to recover it. 00:35:41.739 [2024-06-09 09:14:04.120653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.739 [2024-06-09 09:14:04.120689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.739 qpair failed and we were unable to recover it. 
00:35:41.739 [2024-06-09 09:14:04.121168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.739 [2024-06-09 09:14:04.121180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.739 qpair failed and we were unable to recover it. 00:35:41.739 [2024-06-09 09:14:04.121410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.739 [2024-06-09 09:14:04.121432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.739 qpair failed and we were unable to recover it. 00:35:41.739 [2024-06-09 09:14:04.121779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.739 [2024-06-09 09:14:04.121790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.739 qpair failed and we were unable to recover it. 00:35:41.739 [2024-06-09 09:14:04.122277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.739 [2024-06-09 09:14:04.122287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.739 qpair failed and we were unable to recover it. 00:35:41.739 [2024-06-09 09:14:04.123672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.739 [2024-06-09 09:14:04.123708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.739 qpair failed and we were unable to recover it. 
00:35:41.742 [2024-06-09 09:14:04.176892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.742 [2024-06-09 09:14:04.176919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.742 qpair failed and we were unable to recover it. 00:35:41.742 [2024-06-09 09:14:04.177400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.742 [2024-06-09 09:14:04.177434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.742 qpair failed and we were unable to recover it. 00:35:41.742 [2024-06-09 09:14:04.177929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.742 [2024-06-09 09:14:04.177956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.742 qpair failed and we were unable to recover it. 00:35:41.742 [2024-06-09 09:14:04.178457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.742 [2024-06-09 09:14:04.178485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.742 qpair failed and we were unable to recover it. 00:35:41.742 [2024-06-09 09:14:04.178966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.742 [2024-06-09 09:14:04.178992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.742 qpair failed and we were unable to recover it. 
00:35:41.742 [2024-06-09 09:14:04.179626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.742 [2024-06-09 09:14:04.179714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.742 qpair failed and we were unable to recover it. 00:35:41.742 [2024-06-09 09:14:04.180192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.742 [2024-06-09 09:14:04.180225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.742 qpair failed and we were unable to recover it. 00:35:41.742 [2024-06-09 09:14:04.180806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.180894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.181474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.181525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.182021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.182050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 
00:35:41.743 [2024-06-09 09:14:04.182453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.182492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.183005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.183033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.183436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.183474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.183955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.183983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.184484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.184512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 
00:35:41.743 [2024-06-09 09:14:04.184991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.185019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.185555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.185584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.185979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.186010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.186491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.186521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.186913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.186940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 
00:35:41.743 [2024-06-09 09:14:04.187332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.187362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.187850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.187879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.188354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.188381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.188961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.188989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.189503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.189532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 
00:35:41.743 [2024-06-09 09:14:04.190014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.190043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.190540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.190568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.191062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.191089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.191606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.191634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.192132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.192159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 
00:35:41.743 [2024-06-09 09:14:04.192662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.192690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.193170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.193197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.193770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.193858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.194449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.194486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.194847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.194878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 
00:35:41.743 [2024-06-09 09:14:04.195383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.195422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.195953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.743 [2024-06-09 09:14:04.195980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.743 qpair failed and we were unable to recover it. 00:35:41.743 [2024-06-09 09:14:04.196586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.196674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.197236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.197279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.197839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.197870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 
00:35:41.744 [2024-06-09 09:14:04.198377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.198415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.198892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.198920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.199621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.199710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.200289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.200324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.200813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.200843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 
00:35:41.744 [2024-06-09 09:14:04.201238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.201266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.201769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.201798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.202185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.202212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.202693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.202722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.203110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.203143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 
00:35:41.744 [2024-06-09 09:14:04.203721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.203808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.204433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.204469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.204983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.205013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.205504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.205534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.206027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.206055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 
00:35:41.744 [2024-06-09 09:14:04.206606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.206635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.207014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.207041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.207541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.207569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.208048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.208075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.208574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.208602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 
00:35:41.744 [2024-06-09 09:14:04.209117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.209144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.209652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.209679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.210177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.210204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.210687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.210716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.211193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.211220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 
00:35:41.744 [2024-06-09 09:14:04.211821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.211908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.212590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.212679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.213268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.213302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.213779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.213810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.214291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.214319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 
00:35:41.744 [2024-06-09 09:14:04.214831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.214860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.215358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.215385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.215787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.215829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.216319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.216347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 00:35:41.744 [2024-06-09 09:14:04.216836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.744 [2024-06-09 09:14:04.216865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.744 qpair failed and we were unable to recover it. 
00:35:41.744 [2024-06-09 09:14:04.217259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:41.744 [2024-06-09 09:14:04.217290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:41.745 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create connect() errno = 111, then nvme_tcp_qpair_connect_sock error for tqpair=0x7f6df0000b90 at 10.0.0.2:4420, then "qpair failed and we were unable to recover it.") repeats verbatim, differing only in timestamps, from 09:14:04.217 through 09:14:04.279 ...]
00:35:41.747 [2024-06-09 09:14:04.278979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:41.747 [2024-06-09 09:14:04.279008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:41.748 qpair failed and we were unable to recover it.
00:35:41.748 [2024-06-09 09:14:04.279517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.748 [2024-06-09 09:14:04.279546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.748 qpair failed and we were unable to recover it. 00:35:41.748 [2024-06-09 09:14:04.280058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.748 [2024-06-09 09:14:04.280086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.748 qpair failed and we were unable to recover it. 00:35:41.748 [2024-06-09 09:14:04.280586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.748 [2024-06-09 09:14:04.280616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.748 qpair failed and we were unable to recover it. 00:35:41.748 [2024-06-09 09:14:04.281117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.748 [2024-06-09 09:14:04.281145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.748 qpair failed and we were unable to recover it. 00:35:41.748 [2024-06-09 09:14:04.281640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.748 [2024-06-09 09:14:04.281668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.748 qpair failed and we were unable to recover it. 
00:35:41.748 [2024-06-09 09:14:04.282161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.748 [2024-06-09 09:14:04.282188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.748 qpair failed and we were unable to recover it. 00:35:41.748 [2024-06-09 09:14:04.282789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.748 [2024-06-09 09:14:04.282879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.748 qpair failed and we were unable to recover it. 00:35:41.748 [2024-06-09 09:14:04.283637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.748 [2024-06-09 09:14:04.283730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.748 qpair failed and we were unable to recover it. 00:35:41.748 [2024-06-09 09:14:04.284329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.748 [2024-06-09 09:14:04.284363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.748 qpair failed and we were unable to recover it. 00:35:41.748 [2024-06-09 09:14:04.284885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.748 [2024-06-09 09:14:04.284915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.748 qpair failed and we were unable to recover it. 
00:35:41.748 [2024-06-09 09:14:04.285424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.748 [2024-06-09 09:14:04.285453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.748 qpair failed and we were unable to recover it. 00:35:41.748 [2024-06-09 09:14:04.285957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.748 [2024-06-09 09:14:04.285985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.748 qpair failed and we were unable to recover it. 00:35:41.748 [2024-06-09 09:14:04.286647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.748 [2024-06-09 09:14:04.286736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.748 qpair failed and we were unable to recover it. 00:35:41.748 [2024-06-09 09:14:04.287327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.748 [2024-06-09 09:14:04.287361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.748 qpair failed and we were unable to recover it. 00:35:41.748 [2024-06-09 09:14:04.287899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.748 [2024-06-09 09:14:04.287930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.748 qpair failed and we were unable to recover it. 
00:35:41.748 [2024-06-09 09:14:04.288468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.748 [2024-06-09 09:14:04.288512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.748 qpair failed and we were unable to recover it. 00:35:41.748 [2024-06-09 09:14:04.289076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.748 [2024-06-09 09:14:04.289103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.748 qpair failed and we were unable to recover it. 00:35:41.748 [2024-06-09 09:14:04.289591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.748 [2024-06-09 09:14:04.289682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:41.748 qpair failed and we were unable to recover it. 00:35:42.018 [2024-06-09 09:14:04.290855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.018 [2024-06-09 09:14:04.290904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.018 qpair failed and we were unable to recover it. 00:35:42.018 [2024-06-09 09:14:04.291430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.018 [2024-06-09 09:14:04.291462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.018 qpair failed and we were unable to recover it. 
00:35:42.018 [2024-06-09 09:14:04.291852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.018 [2024-06-09 09:14:04.291885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.018 qpair failed and we were unable to recover it. 00:35:42.018 [2024-06-09 09:14:04.292231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.018 [2024-06-09 09:14:04.292258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.018 qpair failed and we were unable to recover it. 00:35:42.018 [2024-06-09 09:14:04.292755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.018 [2024-06-09 09:14:04.292785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.018 qpair failed and we were unable to recover it. 00:35:42.018 [2024-06-09 09:14:04.293267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.018 [2024-06-09 09:14:04.293295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.018 qpair failed and we were unable to recover it. 00:35:42.018 [2024-06-09 09:14:04.293817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.018 [2024-06-09 09:14:04.293846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.018 qpair failed and we were unable to recover it. 
00:35:42.018 [2024-06-09 09:14:04.294331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.018 [2024-06-09 09:14:04.294358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.018 qpair failed and we were unable to recover it. 00:35:42.018 [2024-06-09 09:14:04.294852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.018 [2024-06-09 09:14:04.294882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.018 qpair failed and we were unable to recover it. 00:35:42.018 [2024-06-09 09:14:04.295379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.018 [2024-06-09 09:14:04.295414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.018 qpair failed and we were unable to recover it. 00:35:42.018 [2024-06-09 09:14:04.295880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.018 [2024-06-09 09:14:04.295907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.018 qpair failed and we were unable to recover it. 00:35:42.018 [2024-06-09 09:14:04.296382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.018 [2024-06-09 09:14:04.296424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.018 qpair failed and we were unable to recover it. 
00:35:42.018 [2024-06-09 09:14:04.296899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.018 [2024-06-09 09:14:04.296926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.018 qpair failed and we were unable to recover it. 00:35:42.018 [2024-06-09 09:14:04.297323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.018 [2024-06-09 09:14:04.297350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.297874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.297903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.298399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.298438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.298958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.298986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 
00:35:42.019 [2024-06-09 09:14:04.299477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.299512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.300022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.300050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.300671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.300762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.301342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.301376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.301902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.301932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 
00:35:42.019 [2024-06-09 09:14:04.302462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.302505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.302993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.303021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.303530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.303559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.304070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.304096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.304528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.304569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 
00:35:42.019 [2024-06-09 09:14:04.305068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.305096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.305596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.305626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.306183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.306211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.306687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.306777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.307388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.307447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 
00:35:42.019 [2024-06-09 09:14:04.307961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.307989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.308640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.308730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.309288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.309323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.309881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.309912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.310389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.310429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 
00:35:42.019 [2024-06-09 09:14:04.311022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.311049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.311646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.311736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.312317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.312353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.312869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.312899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.313384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.313430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 
00:35:42.019 [2024-06-09 09:14:04.313937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.313966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.314425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.314460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.314973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.315001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.315595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.315685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.316273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.316307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 
00:35:42.019 [2024-06-09 09:14:04.316914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.317003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.317653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.317743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.318341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.318376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.019 [2024-06-09 09:14:04.318930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.019 [2024-06-09 09:14:04.318959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.019 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.319464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.319507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 
00:35:42.020 [2024-06-09 09:14:04.320027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.320055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.320672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.320761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.321230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.321264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.321766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.321798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.322336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.322364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 
00:35:42.020 [2024-06-09 09:14:04.322888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.322929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.323282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.323309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.323791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.323821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.324318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.324346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.324865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.324894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 
00:35:42.020 [2024-06-09 09:14:04.325420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.325449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.325972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.326000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.326599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.326689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.327160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.327196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.327681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.327771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 
00:35:42.020 [2024-06-09 09:14:04.328240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.328274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.328657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.328688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.329218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.329246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.329745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.329773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.330281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.330309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 
00:35:42.020 [2024-06-09 09:14:04.330797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.330827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.331309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.331336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.331831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.331860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.332247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.332275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.332815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.332845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 
00:35:42.020 [2024-06-09 09:14:04.333223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.333263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.333749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.333779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.334261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.334289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.334776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.334805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.335321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.335349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 
00:35:42.020 [2024-06-09 09:14:04.335834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.335863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.336374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.336413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.336936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.336965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.337648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.337739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.338340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.338375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 
00:35:42.020 [2024-06-09 09:14:04.338898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.338928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.020 qpair failed and we were unable to recover it. 00:35:42.020 [2024-06-09 09:14:04.339463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.020 [2024-06-09 09:14:04.339507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.340031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.340059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.340546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.340576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.341078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.341106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 
00:35:42.021 [2024-06-09 09:14:04.341611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.341640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.342124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.342151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.342711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.342739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.343244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.343272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.343799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.343828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 
00:35:42.021 [2024-06-09 09:14:04.344332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.344370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.344772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.344801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.345165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.345192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.345672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.345701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.346188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.346215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 
00:35:42.021 [2024-06-09 09:14:04.346755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.346845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.347472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.347527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.347934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.347968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.348349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.348376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.348898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.348927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 
00:35:42.021 [2024-06-09 09:14:04.349420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.349449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.349953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.349980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.350595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.350687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.351289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.351323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.351846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.351878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 
00:35:42.021 [2024-06-09 09:14:04.352252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.352279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.352791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.352820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.353316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.353344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.353869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.353899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.354417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.354447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 
00:35:42.021 [2024-06-09 09:14:04.354973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.355001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.355594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.355684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.356276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.356312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.356797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.356829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.357322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.357350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 
00:35:42.021 [2024-06-09 09:14:04.357841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.357870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.358386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.358429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.358936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.358965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.359626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.021 [2024-06-09 09:14:04.359717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.021 qpair failed and we were unable to recover it. 00:35:42.021 [2024-06-09 09:14:04.360309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.360344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 
00:35:42.022 [2024-06-09 09:14:04.360850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.360880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.361365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.361393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.361834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.361879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.362294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.362328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.362848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.362878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 
00:35:42.022 [2024-06-09 09:14:04.363379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.363418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.363816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.363847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.364334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.364362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.364853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.364883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.365360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.365387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 
00:35:42.022 [2024-06-09 09:14:04.365902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.365943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.366443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.366473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.366937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.366965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.367459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.367490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.367963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.367991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 
00:35:42.022 [2024-06-09 09:14:04.368509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.368538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.369051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.369079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.369581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.369609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.370111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.370138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.370638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.370666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 
00:35:42.022 [2024-06-09 09:14:04.371152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.371179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.371758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.371786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.372311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.372339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.372855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.372883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.373371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.373399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 
00:35:42.022 [2024-06-09 09:14:04.373775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.373810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.374243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.374271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.374669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.374706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.375188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.375216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.375723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.375753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 
00:35:42.022 [2024-06-09 09:14:04.376250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.376278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.376682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.376711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.377196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.377224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.377731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.377760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.378260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.378287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 
00:35:42.022 [2024-06-09 09:14:04.378821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.378849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.022 [2024-06-09 09:14:04.379214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.022 [2024-06-09 09:14:04.379242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.022 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.379731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.379760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.380249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.380276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.380763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.380791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 
00:35:42.023 [2024-06-09 09:14:04.381313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.381340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.381829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.381858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.382242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.382269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.382733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.382761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.383151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.383178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 
00:35:42.023 [2024-06-09 09:14:04.383716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.383746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.384141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.384168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.384655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.384683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.385075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.385102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.385592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.385620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 
00:35:42.023 [2024-06-09 09:14:04.386106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.386140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.386545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.386572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.387057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.387085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.387589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.387617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.388134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.388160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 
00:35:42.023 [2024-06-09 09:14:04.388769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.388861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.389460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.389497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.390040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.390069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.390561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.390590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.390975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.391003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 
00:35:42.023 [2024-06-09 09:14:04.391519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.391549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.392045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.392072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.392557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.392587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.393055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.393082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.393478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.393512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 
00:35:42.023 [2024-06-09 09:14:04.394011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.394038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.394526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.023 [2024-06-09 09:14:04.394554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.023 qpair failed and we were unable to recover it. 00:35:42.023 [2024-06-09 09:14:04.395070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.395097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.395583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.395611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.396116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.396144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 
00:35:42.024 [2024-06-09 09:14:04.396663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.396692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.397205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.397233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.397805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.397897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.398643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.398735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.399326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.399360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 
00:35:42.024 [2024-06-09 09:14:04.399880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.399912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.400424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.400455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.401000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.401028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.401606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.401696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.402280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.402314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 
00:35:42.024 [2024-06-09 09:14:04.402456] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179ce30 is same with the state(5) to be set 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Write completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Write completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Write completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Write 
completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Write completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Write completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Write completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Write completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Write completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Write completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 Read completed with error (sct=0, sc=8) 00:35:42.024 starting I/O failed 00:35:42.024 [2024-06-09 09:14:04.403255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:42.024 [2024-06-09 09:14:04.403856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.403901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.404349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.404380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 
00:35:42.024 [2024-06-09 09:14:04.404971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.405060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.405721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.405811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.406367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.406419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.407011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.407100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.407686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.407776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 
00:35:42.024 [2024-06-09 09:14:04.408099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.408134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.408745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.408834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.409440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.409479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.410018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.410047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.410560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.410589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 
00:35:42.024 [2024-06-09 09:14:04.411100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.411128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.411631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.024 [2024-06-09 09:14:04.411659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.024 qpair failed and we were unable to recover it. 00:35:42.024 [2024-06-09 09:14:04.412163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.412190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it. 00:35:42.025 [2024-06-09 09:14:04.412754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.412844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it. 00:35:42.025 [2024-06-09 09:14:04.413428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.413474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it. 
00:35:42.025 [2024-06-09 09:14:04.414008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.414037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it. 00:35:42.025 [2024-06-09 09:14:04.414732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.414822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it. 00:35:42.025 [2024-06-09 09:14:04.415421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.415459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it. 00:35:42.025 [2024-06-09 09:14:04.415997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.416026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it. 00:35:42.025 [2024-06-09 09:14:04.416679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.416767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it. 
00:35:42.025 [2024-06-09 09:14:04.417355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.417391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it. 00:35:42.025 [2024-06-09 09:14:04.417905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.417935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it. 00:35:42.025 [2024-06-09 09:14:04.418459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.418502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it. 00:35:42.025 [2024-06-09 09:14:04.419034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.419062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it. 00:35:42.025 [2024-06-09 09:14:04.419661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.419749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it. 
00:35:42.025 [2024-06-09 09:14:04.420342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.420377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it. 00:35:42.025 [2024-06-09 09:14:04.420908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.420938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it. 00:35:42.025 [2024-06-09 09:14:04.421465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.421509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it. 00:35:42.025 [2024-06-09 09:14:04.422032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.422060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it. 00:35:42.025 [2024-06-09 09:14:04.422555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.422583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it. 
00:35:42.025 [2024-06-09 09:14:04.423072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.025 [2024-06-09 09:14:04.423099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.025 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111, ECONNREFUSED) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f6e00000b90, addr=10.0.0.2, port=4420 repeated for every subsequent reconnect attempt from 09:14:04.423 through 09:14:04.483, each ending "qpair failed and we were unable to recover it." ...]
00:35:42.028 [2024-06-09 09:14:04.483421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.028 [2024-06-09 09:14:04.483450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.028 qpair failed and we were unable to recover it. 00:35:42.028 [2024-06-09 09:14:04.483978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.028 [2024-06-09 09:14:04.484009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.028 qpair failed and we were unable to recover it. 00:35:42.028 [2024-06-09 09:14:04.484598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.028 [2024-06-09 09:14:04.484685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.028 qpair failed and we were unable to recover it. 00:35:42.028 [2024-06-09 09:14:04.485147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.028 [2024-06-09 09:14:04.485184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.028 qpair failed and we were unable to recover it. 00:35:42.028 [2024-06-09 09:14:04.485661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.028 [2024-06-09 09:14:04.485747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.028 qpair failed and we were unable to recover it. 
00:35:42.028 [2024-06-09 09:14:04.486318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.028 [2024-06-09 09:14:04.486353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.028 qpair failed and we were unable to recover it. 00:35:42.028 [2024-06-09 09:14:04.486854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.028 [2024-06-09 09:14:04.486884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.028 qpair failed and we were unable to recover it. 00:35:42.028 [2024-06-09 09:14:04.487369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.028 [2024-06-09 09:14:04.487397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.028 qpair failed and we were unable to recover it. 00:35:42.028 [2024-06-09 09:14:04.487879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.028 [2024-06-09 09:14:04.487907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.028 qpair failed and we were unable to recover it. 00:35:42.028 [2024-06-09 09:14:04.488278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.028 [2024-06-09 09:14:04.488306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.028 qpair failed and we were unable to recover it. 
00:35:42.028 [2024-06-09 09:14:04.488789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.028 [2024-06-09 09:14:04.488821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.028 qpair failed and we were unable to recover it. 00:35:42.028 [2024-06-09 09:14:04.489309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.028 [2024-06-09 09:14:04.489337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.028 qpair failed and we were unable to recover it. 00:35:42.028 [2024-06-09 09:14:04.489728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.028 [2024-06-09 09:14:04.489758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.028 qpair failed and we were unable to recover it. 00:35:42.028 [2024-06-09 09:14:04.490257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.028 [2024-06-09 09:14:04.490284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.028 qpair failed and we were unable to recover it. 00:35:42.028 [2024-06-09 09:14:04.490779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.028 [2024-06-09 09:14:04.490807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.028 qpair failed and we were unable to recover it. 
00:35:42.029 [2024-06-09 09:14:04.491305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.491333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.491749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.491778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.492318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.492346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.492785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.492814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.493181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.493209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 
00:35:42.029 [2024-06-09 09:14:04.493788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.493875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.494341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.494376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.494879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.494909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.495382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.495417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.495897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.495925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 
00:35:42.029 [2024-06-09 09:14:04.496304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.496330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.496893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.496980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.497441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.497478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.497986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.498016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.498608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.498705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 
00:35:42.029 [2024-06-09 09:14:04.499165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.499204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.499800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.499887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.500480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.500531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.501017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.501045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.501523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.501552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 
00:35:42.029 [2024-06-09 09:14:04.502052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.502079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.502564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.502592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.503073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.503100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.503486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.503514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.504010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.504037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 
00:35:42.029 [2024-06-09 09:14:04.504536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.504565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.505063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.505090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.505573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.505601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.505982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.506009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.506490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.506519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 
00:35:42.029 [2024-06-09 09:14:04.506901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.506928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.507428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.507456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.507952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.507980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.508462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.508491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.509009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.509036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 
00:35:42.029 [2024-06-09 09:14:04.509534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.509563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.510061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.510088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.029 [2024-06-09 09:14:04.510593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.029 [2024-06-09 09:14:04.510620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.029 qpair failed and we were unable to recover it. 00:35:42.030 [2024-06-09 09:14:04.511120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.511147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 00:35:42.030 [2024-06-09 09:14:04.511542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.511581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 
00:35:42.030 [2024-06-09 09:14:04.511932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.511960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 00:35:42.030 [2024-06-09 09:14:04.512457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.512488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 00:35:42.030 [2024-06-09 09:14:04.512852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.512880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 00:35:42.030 [2024-06-09 09:14:04.513387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.513426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 00:35:42.030 [2024-06-09 09:14:04.513901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.513927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 
00:35:42.030 [2024-06-09 09:14:04.514412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.514440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 00:35:42.030 [2024-06-09 09:14:04.514939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.514966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 00:35:42.030 [2024-06-09 09:14:04.515580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.515669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 00:35:42.030 [2024-06-09 09:14:04.516268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.516302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 00:35:42.030 [2024-06-09 09:14:04.516797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.516828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 
00:35:42.030 [2024-06-09 09:14:04.517325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.517353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 00:35:42.030 [2024-06-09 09:14:04.517838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.517867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 00:35:42.030 [2024-06-09 09:14:04.518215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.518243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 00:35:42.030 [2024-06-09 09:14:04.518726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.518755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 00:35:42.030 [2024-06-09 09:14:04.519127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.519154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 
00:35:42.030 [2024-06-09 09:14:04.519548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.519582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 00:35:42.030 [2024-06-09 09:14:04.520072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.520101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 00:35:42.030 [2024-06-09 09:14:04.520617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.520647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 00:35:42.030 [2024-06-09 09:14:04.521131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.521158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 00:35:42.030 [2024-06-09 09:14:04.521661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.521689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 
00:35:42.030 [2024-06-09 09:14:04.522202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.030 [2024-06-09 09:14:04.522230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.030 qpair failed and we were unable to recover it. 
[identical connect() failed (errno = 111) / sock connection error messages for tqpair=0x7f6e00000b90, addr=10.0.0.2, port=4420 repeat through 2024-06-09 09:14:04.584]
00:35:42.378 [2024-06-09 09:14:04.585357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.585391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 00:35:42.378 [2024-06-09 09:14:04.585913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.585942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 00:35:42.378 [2024-06-09 09:14:04.586636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.586723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 00:35:42.378 [2024-06-09 09:14:04.587304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.587338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 00:35:42.378 [2024-06-09 09:14:04.587828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.587858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 
00:35:42.378 [2024-06-09 09:14:04.588338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.588366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 00:35:42.378 [2024-06-09 09:14:04.588876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.588905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 00:35:42.378 [2024-06-09 09:14:04.589411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.589440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 00:35:42.378 [2024-06-09 09:14:04.589938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.589966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 00:35:42.378 [2024-06-09 09:14:04.590398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.590436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 
00:35:42.378 [2024-06-09 09:14:04.591017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.591044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 00:35:42.378 [2024-06-09 09:14:04.591529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.591559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 00:35:42.378 [2024-06-09 09:14:04.592037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.592064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 00:35:42.378 [2024-06-09 09:14:04.592669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.592755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 00:35:42.378 [2024-06-09 09:14:04.593343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.593378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 
00:35:42.378 [2024-06-09 09:14:04.593892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.593921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 00:35:42.378 [2024-06-09 09:14:04.594431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.594462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 00:35:42.378 [2024-06-09 09:14:04.594988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.595017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 00:35:42.378 [2024-06-09 09:14:04.595499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.595528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 00:35:42.378 [2024-06-09 09:14:04.596041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.378 [2024-06-09 09:14:04.596069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.378 qpair failed and we were unable to recover it. 
00:35:42.378 [2024-06-09 09:14:04.596533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.596561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.596947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.596973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.597353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.597380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.597895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.597922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.598409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.598437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 
00:35:42.379 [2024-06-09 09:14:04.598949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.598976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.599634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.599732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.600314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.600349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.600866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.600896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.601381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.601417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 
00:35:42.379 [2024-06-09 09:14:04.601945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.601972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.602589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.602676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.603271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.603307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.603787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.603818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.604315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.604342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 
00:35:42.379 [2024-06-09 09:14:04.604735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.604764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.605264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.605291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.605838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.605866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.606351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.606378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.606757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.606785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 
00:35:42.379 [2024-06-09 09:14:04.607294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.607322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.607845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.607875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.608262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.608300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.608783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.608812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.609290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.609317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 
00:35:42.379 [2024-06-09 09:14:04.609840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.609868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.610359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.610386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.610888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.610916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.611381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.611425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.611789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.611816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 
00:35:42.379 [2024-06-09 09:14:04.612293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.612320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.612847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.612877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.613379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.613414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.613921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.613949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.614454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.614494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 
00:35:42.379 [2024-06-09 09:14:04.615038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.615065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.615718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.615806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.379 [2024-06-09 09:14:04.616429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.379 [2024-06-09 09:14:04.616466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.379 qpair failed and we were unable to recover it. 00:35:42.380 [2024-06-09 09:14:04.616942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.380 [2024-06-09 09:14:04.616972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.380 qpair failed and we were unable to recover it. 00:35:42.380 [2024-06-09 09:14:04.617285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.380 [2024-06-09 09:14:04.617326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.380 qpair failed and we were unable to recover it. 
00:35:42.380 [2024-06-09 09:14:04.617845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.380 [2024-06-09 09:14:04.617875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.380 qpair failed and we were unable to recover it. 00:35:42.380 [2024-06-09 09:14:04.618372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.380 [2024-06-09 09:14:04.618399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.380 qpair failed and we were unable to recover it. 00:35:42.380 [2024-06-09 09:14:04.618906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.380 [2024-06-09 09:14:04.618934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.380 qpair failed and we were unable to recover it. 00:35:42.380 [2024-06-09 09:14:04.619420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.380 [2024-06-09 09:14:04.619448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.380 qpair failed and we were unable to recover it. 00:35:42.380 [2024-06-09 09:14:04.619944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.380 [2024-06-09 09:14:04.619971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.380 qpair failed and we were unable to recover it. 
00:35:42.380 [2024-06-09 09:14:04.620649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.380 [2024-06-09 09:14:04.620735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.380 qpair failed and we were unable to recover it. 00:35:42.380 [2024-06-09 09:14:04.621323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.380 [2024-06-09 09:14:04.621369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.380 qpair failed and we were unable to recover it. 00:35:42.380 [2024-06-09 09:14:04.621883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.380 [2024-06-09 09:14:04.621914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.380 qpair failed and we were unable to recover it. 00:35:42.380 [2024-06-09 09:14:04.622412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.380 [2024-06-09 09:14:04.622441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.380 qpair failed and we were unable to recover it. 00:35:42.380 [2024-06-09 09:14:04.622930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.380 [2024-06-09 09:14:04.622957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.380 qpair failed and we were unable to recover it. 
00:35:42.380 [2024-06-09 09:14:04.623635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.380 [2024-06-09 09:14:04.623723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.380 qpair failed and we were unable to recover it. 00:35:42.380 [2024-06-09 09:14:04.624216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.380 [2024-06-09 09:14:04.624249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.380 qpair failed and we were unable to recover it. 00:35:42.380 [2024-06-09 09:14:04.624757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.380 [2024-06-09 09:14:04.624788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.380 qpair failed and we were unable to recover it. 00:35:42.380 [2024-06-09 09:14:04.625286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.380 [2024-06-09 09:14:04.625313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.380 qpair failed and we were unable to recover it. 00:35:42.380 [2024-06-09 09:14:04.625841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.380 [2024-06-09 09:14:04.625870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.380 qpair failed and we were unable to recover it. 
00:35:42.380 [2024-06-09 09:14:04.626347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.380 [2024-06-09 09:14:04.626374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420
00:35:42.380 qpair failed and we were unable to recover it.
00:35:42.380 [... the identical three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats without variation from 09:14:04.626869 through 09:14:04.686623 ...]
00:35:42.384 [2024-06-09 09:14:04.687130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.687157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.687546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.687578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.688077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.688103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.688600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.688647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.689208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.689234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 
00:35:42.384 [2024-06-09 09:14:04.689794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.689881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.690426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.690462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.690964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.690993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.691635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.691722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.692294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.692328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 
00:35:42.384 [2024-06-09 09:14:04.692864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.692896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.693383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.693437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.693982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.694009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.694382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.694416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.694899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.694926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 
00:35:42.384 [2024-06-09 09:14:04.695427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.695455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.695968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.695995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.696642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.696729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.697302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.697337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.697814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.697845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 
00:35:42.384 [2024-06-09 09:14:04.698229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.698266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.698619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.698649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.699126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.699153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.699549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.699588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.699987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.700019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 
00:35:42.384 [2024-06-09 09:14:04.700384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.700427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.700901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.700929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.701437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.701468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.702030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.702057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.702450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.702482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 
00:35:42.384 [2024-06-09 09:14:04.702999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.703026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.703512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.703540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.704048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.704077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.704549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.704579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.704958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.704992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 
00:35:42.384 [2024-06-09 09:14:04.705497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.384 [2024-06-09 09:14:04.705527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.384 qpair failed and we were unable to recover it. 00:35:42.384 [2024-06-09 09:14:04.706050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.706086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.706630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.706659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.707029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.707056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.707555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.707583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 
00:35:42.385 [2024-06-09 09:14:04.707973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.708000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.708482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.708509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.708973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.709000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.709480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.709509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.710023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.710050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 
00:35:42.385 [2024-06-09 09:14:04.710547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.710574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.711060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.711087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.711581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.711610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.712102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.712130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.712638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.712666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 
00:35:42.385 [2024-06-09 09:14:04.713144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.713172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.713651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.713681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.714164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.714191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.714759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.714846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.715426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.715462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 
00:35:42.385 [2024-06-09 09:14:04.715964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.715992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.716597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.716684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.717256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.717291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.717772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.717803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.718279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.718307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 
00:35:42.385 [2024-06-09 09:14:04.718787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.718815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.719306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.719334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.719840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.719868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.720298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.720326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.720616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.720645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 
00:35:42.385 [2024-06-09 09:14:04.721131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.721158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.721651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.721679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.722182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.722208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.722802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.722890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.385 [2024-06-09 09:14:04.723636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.723722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 
00:35:42.385 [2024-06-09 09:14:04.724311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.385 [2024-06-09 09:14:04.724345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.385 qpair failed and we were unable to recover it. 00:35:42.386 [2024-06-09 09:14:04.724881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.386 [2024-06-09 09:14:04.724911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.386 qpair failed and we were unable to recover it. 00:35:42.386 [2024-06-09 09:14:04.725419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.386 [2024-06-09 09:14:04.725449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.386 qpair failed and we were unable to recover it. 00:35:42.386 [2024-06-09 09:14:04.725937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.386 [2024-06-09 09:14:04.725965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.386 qpair failed and we were unable to recover it. 00:35:42.386 [2024-06-09 09:14:04.726339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.386 [2024-06-09 09:14:04.726366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.386 qpair failed and we were unable to recover it. 
00:35:42.386 [2024-06-09 09:14:04.726932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.386 [2024-06-09 09:14:04.727018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.386 qpair failed and we were unable to recover it. 
[... the same error sequence — connect() failed with errno = 111 (ECONNREFUSED) in posix.c:1037:posix_sock_create, followed by the nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock socket connection error for tqpair=0x7f6e00000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats identically for every connect attempt timestamped 2024-06-09 09:14:04.727688 through 09:14:04.787448 ...]
00:35:42.390 [2024-06-09 09:14:04.787948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.390 [2024-06-09 09:14:04.787975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.390 qpair failed and we were unable to recover it. 
00:35:42.390 [2024-06-09 09:14:04.788456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.390 [2024-06-09 09:14:04.788484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.390 qpair failed and we were unable to recover it. 00:35:42.390 [2024-06-09 09:14:04.788962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.390 [2024-06-09 09:14:04.788989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.390 qpair failed and we were unable to recover it. 00:35:42.390 [2024-06-09 09:14:04.789487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.390 [2024-06-09 09:14:04.789517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.390 qpair failed and we were unable to recover it. 00:35:42.390 [2024-06-09 09:14:04.790023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.390 [2024-06-09 09:14:04.790049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.390 qpair failed and we were unable to recover it. 00:35:42.390 [2024-06-09 09:14:04.790532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.390 [2024-06-09 09:14:04.790560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.390 qpair failed and we were unable to recover it. 
00:35:42.390 [2024-06-09 09:14:04.791066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.390 [2024-06-09 09:14:04.791093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.390 qpair failed and we were unable to recover it. 00:35:42.390 [2024-06-09 09:14:04.791606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.390 [2024-06-09 09:14:04.791635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.390 qpair failed and we were unable to recover it. 00:35:42.390 [2024-06-09 09:14:04.792137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.390 [2024-06-09 09:14:04.792165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.390 qpair failed and we were unable to recover it. 00:35:42.390 [2024-06-09 09:14:04.792645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.390 [2024-06-09 09:14:04.792673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.390 qpair failed and we were unable to recover it. 00:35:42.390 [2024-06-09 09:14:04.793174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.390 [2024-06-09 09:14:04.793201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.390 qpair failed and we were unable to recover it. 
00:35:42.390 [2024-06-09 09:14:04.793821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.390 [2024-06-09 09:14:04.793909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.390 qpair failed and we were unable to recover it. 00:35:42.390 [2024-06-09 09:14:04.794465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.390 [2024-06-09 09:14:04.794516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.390 qpair failed and we were unable to recover it. 00:35:42.390 [2024-06-09 09:14:04.795034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.390 [2024-06-09 09:14:04.795064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.390 qpair failed and we were unable to recover it. 00:35:42.390 [2024-06-09 09:14:04.795486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.390 [2024-06-09 09:14:04.795515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.390 qpair failed and we were unable to recover it. 00:35:42.390 [2024-06-09 09:14:04.795924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.390 [2024-06-09 09:14:04.795952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.390 qpair failed and we were unable to recover it. 
00:35:42.390 [2024-06-09 09:14:04.796436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.390 [2024-06-09 09:14:04.796466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.390 qpair failed and we were unable to recover it. 00:35:42.390 [2024-06-09 09:14:04.796841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.796869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.797336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.797363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.797894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.797924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.798281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.798308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 
00:35:42.391 [2024-06-09 09:14:04.798704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.798734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.799245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.799272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.799770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.799797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.800188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.800216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.800606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.800634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 
00:35:42.391 [2024-06-09 09:14:04.801106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.801133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.801633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.801660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.802172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.802199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.802616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.802645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.803143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.803170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 
00:35:42.391 [2024-06-09 09:14:04.803739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.803826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.804438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.804474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.805006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.805036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.805458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.805516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.806031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.806058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 
00:35:42.391 [2024-06-09 09:14:04.806673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.806760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.807303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.807338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.807849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.807878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.808382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.808419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.808891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.808919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 
00:35:42.391 [2024-06-09 09:14:04.809311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.809338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.809971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.810057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.810685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.810771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.811242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.811279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.811782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.811812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 
00:35:42.391 [2024-06-09 09:14:04.812307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.812335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.812822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.812851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.813256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.813285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.813790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.813819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.814301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.814328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 
00:35:42.391 [2024-06-09 09:14:04.814833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.814862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.815342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.815369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.815856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.391 [2024-06-09 09:14:04.815884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.391 qpair failed and we were unable to recover it. 00:35:42.391 [2024-06-09 09:14:04.816380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.816413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 00:35:42.392 [2024-06-09 09:14:04.816893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.816921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 
00:35:42.392 [2024-06-09 09:14:04.817322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.817348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 00:35:42.392 [2024-06-09 09:14:04.817921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.818007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 00:35:42.392 [2024-06-09 09:14:04.818516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.818554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 00:35:42.392 [2024-06-09 09:14:04.819032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.819060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 00:35:42.392 [2024-06-09 09:14:04.819658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.819744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 
00:35:42.392 [2024-06-09 09:14:04.820340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.820376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 00:35:42.392 [2024-06-09 09:14:04.820890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.820922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 00:35:42.392 [2024-06-09 09:14:04.821418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.821449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 00:35:42.392 [2024-06-09 09:14:04.821979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.822006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 00:35:42.392 [2024-06-09 09:14:04.822593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.822679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 
00:35:42.392 [2024-06-09 09:14:04.823244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.823278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 00:35:42.392 [2024-06-09 09:14:04.823820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.823850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 00:35:42.392 [2024-06-09 09:14:04.824361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.824389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 00:35:42.392 [2024-06-09 09:14:04.824764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.824792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 00:35:42.392 [2024-06-09 09:14:04.825342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.825370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 
00:35:42.392 [2024-06-09 09:14:04.825854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.825883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 00:35:42.392 [2024-06-09 09:14:04.826384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.826423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 00:35:42.392 [2024-06-09 09:14:04.826899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.826927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 00:35:42.392 [2024-06-09 09:14:04.827415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.827455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 00:35:42.392 [2024-06-09 09:14:04.827957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.827985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 
00:35:42.392 [2024-06-09 09:14:04.828466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.392 [2024-06-09 09:14:04.828496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.392 qpair failed and we were unable to recover it. 
[... identical connect()/qpair error triplet repeated for every retry from 09:14:04.828958 through 09:14:04.890909: each attempt to reach 10.0.0.2:4420 fails with errno = 111 (ECONNREFUSED) on tqpair=0x7f6e00000b90, and the qpair is never recovered ...]
00:35:42.396 [2024-06-09 09:14:04.891391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.891431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.891986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.892013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.892626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.892716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.893342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.893385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.893888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.893917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 
00:35:42.396 [2024-06-09 09:14:04.894477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.894521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.895019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.895046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.895662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.895752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.896167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.896201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.896702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.896732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 
00:35:42.396 [2024-06-09 09:14:04.897306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.897333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.897838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.897867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.898255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.898283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.898702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.898744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.899225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.899252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 
00:35:42.396 [2024-06-09 09:14:04.899733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.899761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.900256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.900283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.900776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.900804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.901284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.901311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.901845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.901874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 
00:35:42.396 [2024-06-09 09:14:04.902249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.902276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.902685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.902715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.903233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.903260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.903756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.903785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.904271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.904299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 
00:35:42.396 [2024-06-09 09:14:04.904850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.904878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.905347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.905374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.905911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.905940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.906466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.906507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.907009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.907038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 
00:35:42.396 [2024-06-09 09:14:04.907519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.907547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.396 [2024-06-09 09:14:04.908035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-06-09 09:14:04.908062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.396 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.908562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.908590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.909062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.909089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.909570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.909597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 
00:35:42.397 [2024-06-09 09:14:04.910093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.910120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.910604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.910633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.911020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.911047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.911553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.911580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.912099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.912127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 
00:35:42.397 [2024-06-09 09:14:04.912531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.912559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.913056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.913082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.913576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.913604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.914115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.914147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.914530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.914559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 
00:35:42.397 [2024-06-09 09:14:04.915059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.915086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.915593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.915620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.916123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.916149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.916636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.916663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.917166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.917192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 
00:35:42.397 [2024-06-09 09:14:04.917855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.917943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.918371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.918429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.918983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.919014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.919610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.919696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.920273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.920309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 
00:35:42.397 [2024-06-09 09:14:04.920786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.920818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.921197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.921225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.921730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.921819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.922396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.922446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.922964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.922993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 
00:35:42.397 [2024-06-09 09:14:04.923590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.923677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.924258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.924293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.924782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.924813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.397 [2024-06-09 09:14:04.925287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-06-09 09:14:04.925321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.397 qpair failed and we were unable to recover it. 00:35:42.667 [2024-06-09 09:14:04.925882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.667 [2024-06-09 09:14:04.925912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.667 qpair failed and we were unable to recover it. 
00:35:42.667 [2024-06-09 09:14:04.926392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.667 [2024-06-09 09:14:04.926442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.667 qpair failed and we were unable to recover it. 00:35:42.667 [2024-06-09 09:14:04.926968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.667 [2024-06-09 09:14:04.926996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.667 qpair failed and we were unable to recover it. 00:35:42.667 [2024-06-09 09:14:04.927592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.667 [2024-06-09 09:14:04.927679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.667 qpair failed and we were unable to recover it. 00:35:42.667 [2024-06-09 09:14:04.928275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.667 [2024-06-09 09:14:04.928310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.667 qpair failed and we were unable to recover it. 00:35:42.667 [2024-06-09 09:14:04.928790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.667 [2024-06-09 09:14:04.928820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.667 qpair failed and we were unable to recover it. 
00:35:42.667 [2024-06-09 09:14:04.929325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.667 [2024-06-09 09:14:04.929353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.667 qpair failed and we were unable to recover it. 00:35:42.667 [2024-06-09 09:14:04.929856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.667 [2024-06-09 09:14:04.929885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.667 qpair failed and we were unable to recover it. 00:35:42.667 [2024-06-09 09:14:04.930364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.667 [2024-06-09 09:14:04.930392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.667 qpair failed and we were unable to recover it. 00:35:42.667 [2024-06-09 09:14:04.930792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.667 [2024-06-09 09:14:04.930820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.667 qpair failed and we were unable to recover it. 00:35:42.667 [2024-06-09 09:14:04.931296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.667 [2024-06-09 09:14:04.931324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.667 qpair failed and we were unable to recover it. 
00:35:42.667 [2024-06-09 09:14:04.931797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.667 [2024-06-09 09:14:04.931826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.667 qpair failed and we were unable to recover it. 
[The three messages above repeat continuously with successive timestamps from 09:14:04.932 through 09:14:04.991: every connect() attempt (errno 111, ECONNREFUSED) to 10.0.0.2 port 4420 on tqpair 0x7f6e00000b90 fails, and each time the qpair cannot be recovered.]
00:35:42.671 [2024-06-09 09:14:04.991787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:04.991828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:04.992217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:04.992246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:04.992658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:04.992688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:04.993205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:04.993232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:04.993726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:04.993755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 
00:35:42.671 [2024-06-09 09:14:04.994261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:04.994288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:04.994775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:04.994805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:04.995178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:04.995212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:04.995710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:04.995739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:04.996324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:04.996351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 
00:35:42.671 [2024-06-09 09:14:04.996861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:04.996889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:04.998454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:04.998511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:04.999027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:04.999056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:04.999543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:04.999581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:05.000080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.000108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 
00:35:42.671 [2024-06-09 09:14:05.000500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.000532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:05.001052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.001081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:05.001578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.001608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:05.002092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.002119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:05.002615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.002643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 
00:35:42.671 [2024-06-09 09:14:05.003185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.003212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:05.003790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.003879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:05.004386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.004435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:05.004954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.004987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:05.005502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.005532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 
00:35:42.671 [2024-06-09 09:14:05.006622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.006671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:05.007164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.007192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:05.007747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.007778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:05.008301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.008329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:05.008843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.008871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 
00:35:42.671 [2024-06-09 09:14:05.009400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.009453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:05.009946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.009973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:05.010461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.010502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:05.011016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.011043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:05.011527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.011556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 
00:35:42.671 [2024-06-09 09:14:05.012060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.671 [2024-06-09 09:14:05.012087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.671 qpair failed and we were unable to recover it. 00:35:42.671 [2024-06-09 09:14:05.012588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.012616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.013100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.013127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.013600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.013627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.014164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.014191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 
00:35:42.672 [2024-06-09 09:14:05.014771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.014857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.015439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.015476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.015995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.016024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.016397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.016436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.016934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.016962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 
00:35:42.672 [2024-06-09 09:14:05.017639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.017725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.018276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.018310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.018694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.018726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.019212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.019239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.019735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.019766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 
00:35:42.672 [2024-06-09 09:14:05.020240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.020267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.020732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.020761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.020944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.020970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.021484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.021523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.022009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.022037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 
00:35:42.672 [2024-06-09 09:14:05.022510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.022539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.023033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.023060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.023560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.023589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.024073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.672 [2024-06-09 09:14:05.024100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.672 qpair failed and we were unable to recover it. 00:35:42.672 [2024-06-09 09:14:05.024592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.673 [2024-06-09 09:14:05.024619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.673 qpair failed and we were unable to recover it. 
00:35:42.673 [2024-06-09 09:14:05.025114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.673 [2024-06-09 09:14:05.025141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.673 qpair failed and we were unable to recover it. 00:35:42.673 [2024-06-09 09:14:05.025521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.673 [2024-06-09 09:14:05.025549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.673 qpair failed and we were unable to recover it. 00:35:42.673 [2024-06-09 09:14:05.026030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.673 [2024-06-09 09:14:05.026057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.673 qpair failed and we were unable to recover it. 00:35:42.673 [2024-06-09 09:14:05.026554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.673 [2024-06-09 09:14:05.026582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.673 qpair failed and we were unable to recover it. 00:35:42.673 [2024-06-09 09:14:05.027066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.673 [2024-06-09 09:14:05.027093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.673 qpair failed and we were unable to recover it. 
00:35:42.673 [2024-06-09 09:14:05.027573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.673 [2024-06-09 09:14:05.027601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.673 qpair failed and we were unable to recover it. 00:35:42.673 [2024-06-09 09:14:05.028105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.673 [2024-06-09 09:14:05.028132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.673 qpair failed and we were unable to recover it. 00:35:42.673 [2024-06-09 09:14:05.028613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.673 [2024-06-09 09:14:05.028641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.673 qpair failed and we were unable to recover it. 00:35:42.673 [2024-06-09 09:14:05.029113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.673 [2024-06-09 09:14:05.029140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.673 qpair failed and we were unable to recover it. 00:35:42.673 [2024-06-09 09:14:05.029539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.673 [2024-06-09 09:14:05.029572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.673 qpair failed and we were unable to recover it. 
00:35:42.673 [2024-06-09 09:14:05.030050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.673 [2024-06-09 09:14:05.030077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.673 qpair failed and we were unable to recover it. 00:35:42.673 [2024-06-09 09:14:05.030578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.673 [2024-06-09 09:14:05.030606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.673 qpair failed and we were unable to recover it. 00:35:42.673 [2024-06-09 09:14:05.030972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.673 [2024-06-09 09:14:05.031009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.673 qpair failed and we were unable to recover it. 00:35:42.673 [2024-06-09 09:14:05.031502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.673 [2024-06-09 09:14:05.031531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.673 qpair failed and we were unable to recover it. 00:35:42.673 [2024-06-09 09:14:05.032009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.673 [2024-06-09 09:14:05.032037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.673 qpair failed and we were unable to recover it. 
00:35:42.673 [2024-06-09 09:14:05.032413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.673 [2024-06-09 09:14:05.032442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420
00:35:42.673 qpair failed and we were unable to recover it.
00:35:42.676 [2024-06-09 09:14:05.096367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.676 [2024-06-09 09:14:05.096395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.676 qpair failed and we were unable to recover it. 00:35:42.676 [2024-06-09 09:14:05.096912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.676 [2024-06-09 09:14:05.096944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.676 qpair failed and we were unable to recover it. 00:35:42.676 [2024-06-09 09:14:05.097457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.676 [2024-06-09 09:14:05.097498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.676 qpair failed and we were unable to recover it. 00:35:42.676 [2024-06-09 09:14:05.098016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.676 [2024-06-09 09:14:05.098043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.676 qpair failed and we were unable to recover it. 00:35:42.676 [2024-06-09 09:14:05.098523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.676 [2024-06-09 09:14:05.098554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.676 qpair failed and we were unable to recover it. 
00:35:42.676 [2024-06-09 09:14:05.099044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.676 [2024-06-09 09:14:05.099072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.676 qpair failed and we were unable to recover it. 00:35:42.676 [2024-06-09 09:14:05.099467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.676 [2024-06-09 09:14:05.099495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.676 qpair failed and we were unable to recover it. 00:35:42.676 [2024-06-09 09:14:05.099886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.676 [2024-06-09 09:14:05.099922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.676 qpair failed and we were unable to recover it. 00:35:42.676 [2024-06-09 09:14:05.100311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.676 [2024-06-09 09:14:05.100340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.676 qpair failed and we were unable to recover it. 00:35:42.676 [2024-06-09 09:14:05.100750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.676 [2024-06-09 09:14:05.100788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.676 qpair failed and we were unable to recover it. 
00:35:42.676 [2024-06-09 09:14:05.101274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.676 [2024-06-09 09:14:05.101312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.676 qpair failed and we were unable to recover it. 00:35:42.676 [2024-06-09 09:14:05.101783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.676 [2024-06-09 09:14:05.101812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.676 qpair failed and we were unable to recover it. 00:35:42.676 [2024-06-09 09:14:05.102283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.676 [2024-06-09 09:14:05.102311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.676 qpair failed and we were unable to recover it. 00:35:42.676 [2024-06-09 09:14:05.102851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.676 [2024-06-09 09:14:05.102879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.676 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.103371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.103399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 
00:35:42.677 [2024-06-09 09:14:05.103898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.103926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.104416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.104445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.104962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.104990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.105592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.105677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.106222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.106258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 
00:35:42.677 [2024-06-09 09:14:05.106730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.106761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.107246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.107273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.107756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.107784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.108276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.108304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.108830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.108860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 
00:35:42.677 [2024-06-09 09:14:05.109331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.109358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.109839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.109869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.110258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.110286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.110682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.110711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.111120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.111147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 
00:35:42.677 [2024-06-09 09:14:05.111540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.111578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.111948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.111980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.112484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.112513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.113015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.113043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.113526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.113554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 
00:35:42.677 [2024-06-09 09:14:05.114054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.114082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.114564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.114592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.115071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.115099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.115592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.115620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 00:35:42.677 [2024-06-09 09:14:05.116119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.677 [2024-06-09 09:14:05.116146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.677 qpair failed and we were unable to recover it. 
00:35:42.678 [2024-06-09 09:14:05.116668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.116696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.117189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.117217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.117786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.117872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.118465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.118502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.119001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.119031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 
00:35:42.678 [2024-06-09 09:14:05.119506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.119538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.120002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.120029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.120550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.120579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.121073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.121101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.121639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.121668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 
00:35:42.678 [2024-06-09 09:14:05.122147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.122184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.122598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.122628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.123011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.123050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.123617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.123649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.124136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.124164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 
00:35:42.678 [2024-06-09 09:14:05.124571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.124599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.125084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.125111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.125514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.125542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.126095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.126122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.126620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.126649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 
00:35:42.678 [2024-06-09 09:14:05.127145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.127172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.127758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.127846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.128379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.128442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.128944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.128971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.129609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.129697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 
00:35:42.678 [2024-06-09 09:14:05.130254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.130288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.130819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.130851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.131297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.131324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.131854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.131883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.132290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.132319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 
00:35:42.678 [2024-06-09 09:14:05.132779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.132808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.133351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.133379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.133880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.133908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.134389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.134428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 00:35:42.678 [2024-06-09 09:14:05.134828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.678 [2024-06-09 09:14:05.134856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.678 qpair failed and we were unable to recover it. 
00:35:42.679 [2024-06-09 09:14:05.135339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.679 [2024-06-09 09:14:05.135369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420
00:35:42.679 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1037 connect() errno = 111, then nvme_tcp.c:2374 sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every intervening log entry from 09:14:05.135915 through 09:14:05.194038, only the timestamps changing ...]
00:35:42.682 [2024-06-09 09:14:05.194480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:42.682 [2024-06-09 09:14:05.194510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420
00:35:42.682 qpair failed and we were unable to recover it.
00:35:42.682 [2024-06-09 09:14:05.195087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.195114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 00:35:42.682 [2024-06-09 09:14:05.195592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.195620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 00:35:42.682 [2024-06-09 09:14:05.196099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.196126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 00:35:42.682 [2024-06-09 09:14:05.196556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.196585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 00:35:42.682 [2024-06-09 09:14:05.197064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.197091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 
00:35:42.682 [2024-06-09 09:14:05.197589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.197617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 00:35:42.682 [2024-06-09 09:14:05.198100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.198127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 00:35:42.682 [2024-06-09 09:14:05.198601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.198630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 00:35:42.682 [2024-06-09 09:14:05.199126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.199154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 00:35:42.682 [2024-06-09 09:14:05.199666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.199694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 
00:35:42.682 [2024-06-09 09:14:05.200205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.200232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 00:35:42.682 [2024-06-09 09:14:05.200812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.200900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 00:35:42.682 [2024-06-09 09:14:05.201599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.201685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 00:35:42.682 [2024-06-09 09:14:05.202259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.202293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 00:35:42.682 [2024-06-09 09:14:05.202773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.202806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 
00:35:42.682 [2024-06-09 09:14:05.203188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.203215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 00:35:42.682 [2024-06-09 09:14:05.203703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.203734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 00:35:42.682 [2024-06-09 09:14:05.204118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.204145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 00:35:42.682 [2024-06-09 09:14:05.204738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.204826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 00:35:42.682 [2024-06-09 09:14:05.205414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.682 [2024-06-09 09:14:05.205451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.682 qpair failed and we were unable to recover it. 
00:35:42.683 [2024-06-09 09:14:05.205947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.683 [2024-06-09 09:14:05.205986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.683 qpair failed and we were unable to recover it. 00:35:42.683 [2024-06-09 09:14:05.206465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.683 [2024-06-09 09:14:05.206508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.683 qpair failed and we were unable to recover it. 00:35:42.683 [2024-06-09 09:14:05.207054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.683 [2024-06-09 09:14:05.207081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.683 qpair failed and we were unable to recover it. 00:35:42.683 [2024-06-09 09:14:05.207700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.683 [2024-06-09 09:14:05.207787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.683 qpair failed and we were unable to recover it. 00:35:42.683 [2024-06-09 09:14:05.208372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.683 [2024-06-09 09:14:05.208420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.683 qpair failed and we were unable to recover it. 
00:35:42.683 [2024-06-09 09:14:05.208924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.683 [2024-06-09 09:14:05.208953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.683 qpair failed and we were unable to recover it. 00:35:42.683 [2024-06-09 09:14:05.209340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.683 [2024-06-09 09:14:05.209368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.683 qpair failed and we were unable to recover it. 00:35:42.683 [2024-06-09 09:14:05.209901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.683 [2024-06-09 09:14:05.209931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.683 qpair failed and we were unable to recover it. 00:35:42.683 [2024-06-09 09:14:05.210416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.683 [2024-06-09 09:14:05.210446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.683 qpair failed and we were unable to recover it. 00:35:42.683 [2024-06-09 09:14:05.210966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.683 [2024-06-09 09:14:05.210994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.683 qpair failed and we were unable to recover it. 
00:35:42.683 [2024-06-09 09:14:05.211462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.683 [2024-06-09 09:14:05.211503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.683 qpair failed and we were unable to recover it. 00:35:42.683 [2024-06-09 09:14:05.212002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.683 [2024-06-09 09:14:05.212029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.683 qpair failed and we were unable to recover it. 00:35:42.683 [2024-06-09 09:14:05.212699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.683 [2024-06-09 09:14:05.212789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.683 qpair failed and we were unable to recover it. 00:35:42.683 [2024-06-09 09:14:05.213372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.683 [2024-06-09 09:14:05.213421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.683 qpair failed and we were unable to recover it. 00:35:42.683 [2024-06-09 09:14:05.213920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.683 [2024-06-09 09:14:05.213950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.683 qpair failed and we were unable to recover it. 
00:35:42.683 [2024-06-09 09:14:05.215704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.683 [2024-06-09 09:14:05.215759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.683 qpair failed and we were unable to recover it. 00:35:42.683 [2024-06-09 09:14:05.216276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.683 [2024-06-09 09:14:05.216306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.683 qpair failed and we were unable to recover it. 00:35:42.683 [2024-06-09 09:14:05.216931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.683 [2024-06-09 09:14:05.217019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.683 qpair failed and we were unable to recover it. 00:35:42.956 [2024-06-09 09:14:05.217693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.956 [2024-06-09 09:14:05.217781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.956 qpair failed and we were unable to recover it. 00:35:42.956 [2024-06-09 09:14:05.218369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.956 [2024-06-09 09:14:05.218420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.956 qpair failed and we were unable to recover it. 
00:35:42.956 [2024-06-09 09:14:05.218914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.956 [2024-06-09 09:14:05.218943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.956 qpair failed and we were unable to recover it. 00:35:42.956 [2024-06-09 09:14:05.219432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.956 [2024-06-09 09:14:05.219464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.956 qpair failed and we were unable to recover it. 00:35:42.956 [2024-06-09 09:14:05.219978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.956 [2024-06-09 09:14:05.220006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.956 qpair failed and we were unable to recover it. 00:35:42.956 [2024-06-09 09:14:05.220354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.956 [2024-06-09 09:14:05.220382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.956 qpair failed and we were unable to recover it. 00:35:42.956 [2024-06-09 09:14:05.220935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.956 [2024-06-09 09:14:05.220964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.956 qpair failed and we were unable to recover it. 
00:35:42.956 [2024-06-09 09:14:05.221471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.956 [2024-06-09 09:14:05.221514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.956 qpair failed and we were unable to recover it. 00:35:42.956 [2024-06-09 09:14:05.222015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.956 [2024-06-09 09:14:05.222043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.956 qpair failed and we were unable to recover it. 00:35:42.956 [2024-06-09 09:14:05.222594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.956 [2024-06-09 09:14:05.222625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.956 qpair failed and we were unable to recover it. 00:35:42.956 [2024-06-09 09:14:05.223098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.956 [2024-06-09 09:14:05.223125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.956 qpair failed and we were unable to recover it. 00:35:42.956 [2024-06-09 09:14:05.223630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.956 [2024-06-09 09:14:05.223658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.956 qpair failed and we were unable to recover it. 
00:35:42.956 [2024-06-09 09:14:05.224138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.956 [2024-06-09 09:14:05.224166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.956 qpair failed and we were unable to recover it. 00:35:42.956 [2024-06-09 09:14:05.224738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.956 [2024-06-09 09:14:05.224825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.956 qpair failed and we were unable to recover it. 00:35:42.956 [2024-06-09 09:14:05.225446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.956 [2024-06-09 09:14:05.225484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.956 qpair failed and we were unable to recover it. 00:35:42.956 [2024-06-09 09:14:05.225993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.956 [2024-06-09 09:14:05.226022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.956 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.226508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.226537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 
00:35:42.957 [2024-06-09 09:14:05.226927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.226955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.227435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.227464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.227946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.227976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.228471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.228499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.229012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.229040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 
00:35:42.957 [2024-06-09 09:14:05.229533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.229577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.230093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.230120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.230601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.230630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.231109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.231137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.231507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.231535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 
00:35:42.957 [2024-06-09 09:14:05.232119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.232147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.232649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.232677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.233173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.233200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.233760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.233846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.234432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.234468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 
00:35:42.957 [2024-06-09 09:14:05.234982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.235010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.235400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.235445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.235972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.236001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.236621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.236708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.237297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.237332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 
00:35:42.957 [2024-06-09 09:14:05.237853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.237885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.238357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.238384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.238873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.238902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.239372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.239400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.239893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.239921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 
00:35:42.957 [2024-06-09 09:14:05.240398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.240432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.240931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.240958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.241282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.241310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.241789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.241875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.242372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.242429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 
00:35:42.957 [2024-06-09 09:14:05.242912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.242941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 [2024-06-09 09:14:05.243430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.957 [2024-06-09 09:14:05.243460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e00000b90 with addr=10.0.0.2, port=4420 00:35:42.957 qpair failed and we were unable to recover it. 00:35:42.957 Read completed with error (sct=0, sc=8) 00:35:42.957 starting I/O failed 00:35:42.957 Read completed with error (sct=0, sc=8) 00:35:42.957 starting I/O failed 00:35:42.957 Read completed with error (sct=0, sc=8) 00:35:42.957 starting I/O failed 00:35:42.957 Read completed with error (sct=0, sc=8) 00:35:42.957 starting I/O failed 00:35:42.957 Read completed with error (sct=0, sc=8) 00:35:42.957 starting I/O failed 00:35:42.957 Read completed with error (sct=0, sc=8) 00:35:42.957 starting I/O failed 00:35:42.957 Read completed with error (sct=0, sc=8) 00:35:42.957 starting I/O failed 00:35:42.957 Read completed with error (sct=0, sc=8) 00:35:42.957 starting I/O failed 00:35:42.957 Write completed with error (sct=0, sc=8) 00:35:42.957 starting I/O failed 00:35:42.957 Read completed with error (sct=0, sc=8) 00:35:42.957 starting I/O failed 00:35:42.958 Write completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Write completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Write completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Read completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Write completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 
Write completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Read completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Write completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Write completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Write completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Write completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Read completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Read completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Write completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Read completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Write completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Write completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Read completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Write completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Write completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Read completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 Read completed with error (sct=0, sc=8) 00:35:42.958 starting I/O failed 00:35:42.958 [2024-06-09 09:14:05.243750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.958 [2024-06-09 09:14:05.244203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.244218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 
00:35:42.958 [2024-06-09 09:14:05.244483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.244502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.245065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.245101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.245512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.245526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.246101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.246138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.246710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.246747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 
00:35:42.958 [2024-06-09 09:14:05.247278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.247290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.247831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.247869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.248374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.248386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.248968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.249006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.249632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.249669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 
00:35:42.958 [2024-06-09 09:14:05.250178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.250190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.250719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.250757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.251238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.251250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.251799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.251837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.252370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.252383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 
00:35:42.958 [2024-06-09 09:14:05.252997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.253035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.253644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.253682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.253951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.253967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.254324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.254334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.254799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.254814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 
00:35:42.958 [2024-06-09 09:14:05.255267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.255277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.255913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.255950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.256214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.256227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.256773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.256811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.257187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.257199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 
00:35:42.958 [2024-06-09 09:14:05.257761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.257798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.258281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.258293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.958 [2024-06-09 09:14:05.258812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.958 [2024-06-09 09:14:05.258822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.958 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.259283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.259292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.259820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.259829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 
00:35:42.959 [2024-06-09 09:14:05.260290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.260299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.260806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.260816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.261276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.261285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.261623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.261635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.262148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.262159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 
00:35:42.959 [2024-06-09 09:14:05.262703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.262740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.263223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.263235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.263780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.263817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.264210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.264222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.264711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.264748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 
00:35:42.959 [2024-06-09 09:14:05.265243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.265255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.265852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.265889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.266376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.266388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.266920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.266957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.267626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.267663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 
00:35:42.959 [2024-06-09 09:14:05.268147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.268159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.268696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.268738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.269221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.269233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.269802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.269840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.270387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.270398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 
00:35:42.959 [2024-06-09 09:14:05.270937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.270974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.271159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.271170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.271710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.271747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.272243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.272255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.272819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.272856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 
00:35:42.959 [2024-06-09 09:14:05.273353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.273365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.273949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.273986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.274624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.274661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.275158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.275170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.275699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.275736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 
00:35:42.959 [2024-06-09 09:14:05.276244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.959 [2024-06-09 09:14:05.276256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.959 qpair failed and we were unable to recover it. 00:35:42.959 [2024-06-09 09:14:05.276804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.960 [2024-06-09 09:14:05.276842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.960 qpair failed and we were unable to recover it. 00:35:42.960 [2024-06-09 09:14:05.277306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.960 [2024-06-09 09:14:05.277318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.960 qpair failed and we were unable to recover it. 00:35:42.960 [2024-06-09 09:14:05.277833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.960 [2024-06-09 09:14:05.277844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.960 qpair failed and we were unable to recover it. 00:35:42.960 [2024-06-09 09:14:05.278301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.960 [2024-06-09 09:14:05.278311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.960 qpair failed and we were unable to recover it. 
00:35:42.960 [2024-06-09 09:14:05.278930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.960 [2024-06-09 09:14:05.278968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.960 qpair failed and we were unable to recover it. 00:35:42.960 [2024-06-09 09:14:05.279356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.960 [2024-06-09 09:14:05.279368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.960 qpair failed and we were unable to recover it. 00:35:42.960 [2024-06-09 09:14:05.279931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.960 [2024-06-09 09:14:05.279968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.960 qpair failed and we were unable to recover it. 00:35:42.960 [2024-06-09 09:14:05.280597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.960 [2024-06-09 09:14:05.280635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.960 qpair failed and we were unable to recover it. 00:35:42.960 [2024-06-09 09:14:05.281029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.960 [2024-06-09 09:14:05.281040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.960 qpair failed and we were unable to recover it. 
00:35:42.960 [2024-06-09 09:14:05.281481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.960 [2024-06-09 09:14:05.281492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.960 qpair failed and we were unable to recover it. 00:35:42.960 [2024-06-09 09:14:05.281966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.960 [2024-06-09 09:14:05.281975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.960 qpair failed and we were unable to recover it. 00:35:42.960 [2024-06-09 09:14:05.282424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.960 [2024-06-09 09:14:05.282434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.960 qpair failed and we were unable to recover it. 00:35:42.960 [2024-06-09 09:14:05.282876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.960 [2024-06-09 09:14:05.282890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.960 qpair failed and we were unable to recover it. 00:35:42.960 [2024-06-09 09:14:05.283324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.960 [2024-06-09 09:14:05.283334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.960 qpair failed and we were unable to recover it. 
00:35:42.960 [2024-06-09 09:14:05.283694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.960 [2024-06-09 09:14:05.283704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.960 qpair failed and we were unable to recover it. 
[... the three messages above (connect() failed with errno = 111, i.e. ECONNREFUSED; sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeat ~115 more times, timestamps advancing from 2024-06-09 09:14:05.284040 through 09:14:05.342658 ...]
00:35:42.963 [2024-06-09 09:14:05.343155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.963 [2024-06-09 09:14:05.343167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.963 qpair failed and we were unable to recover it. 00:35:42.963 [2024-06-09 09:14:05.343737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.963 [2024-06-09 09:14:05.343774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.963 qpair failed and we were unable to recover it. 00:35:42.963 [2024-06-09 09:14:05.344269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.963 [2024-06-09 09:14:05.344281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.963 qpair failed and we were unable to recover it. 00:35:42.963 [2024-06-09 09:14:05.344723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.963 [2024-06-09 09:14:05.344734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.963 qpair failed and we were unable to recover it. 00:35:42.963 [2024-06-09 09:14:05.345192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.963 [2024-06-09 09:14:05.345202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.963 qpair failed and we were unable to recover it. 
00:35:42.963 [2024-06-09 09:14:05.345792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.963 [2024-06-09 09:14:05.345829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.963 qpair failed and we were unable to recover it. 00:35:42.963 [2024-06-09 09:14:05.346102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.963 [2024-06-09 09:14:05.346118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.963 qpair failed and we were unable to recover it. 00:35:42.963 [2024-06-09 09:14:05.346621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.963 [2024-06-09 09:14:05.346631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.963 qpair failed and we were unable to recover it. 00:35:42.963 [2024-06-09 09:14:05.347075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.347085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.347626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.347663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 
00:35:42.964 [2024-06-09 09:14:05.348156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.348168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.348701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.348738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.349228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.349240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.349769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.349806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.350366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.350378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 
00:35:42.964 [2024-06-09 09:14:05.350828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.350866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.351336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.351348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.351806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.351817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.352252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.352262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.352890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.352927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 
00:35:42.964 [2024-06-09 09:14:05.353609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.353646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.354137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.354149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.354677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.354714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.355109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.355121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.355704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.355742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 
00:35:42.964 [2024-06-09 09:14:05.356173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.356185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.356713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.356751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.357141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.357152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.357678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.357715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.358201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.358214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 
00:35:42.964 [2024-06-09 09:14:05.358801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.358838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.359352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.359364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.359853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.359864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.360183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.360194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.360644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.360682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 
00:35:42.964 [2024-06-09 09:14:05.361061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.361073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.361624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.361661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.362165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.362178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.362764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.362802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.363295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.363306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 
00:35:42.964 [2024-06-09 09:14:05.363768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.363778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.364260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.364269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.364796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.364834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.365301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.365316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 00:35:42.964 [2024-06-09 09:14:05.365784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.365795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.964 qpair failed and we were unable to recover it. 
00:35:42.964 [2024-06-09 09:14:05.366241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.964 [2024-06-09 09:14:05.366250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.366821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.366863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.367352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.367364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.367905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.367943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.368609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.368647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 
00:35:42.965 [2024-06-09 09:14:05.369132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.369143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.369787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.369825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.370306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.370318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.370893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.370903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.371333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.371342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 
00:35:42.965 [2024-06-09 09:14:05.371992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.372029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.372624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.372662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.373169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.373181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.373708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.373745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.374236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.374247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 
00:35:42.965 [2024-06-09 09:14:05.374841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.374879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.375375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.375386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.375937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.375975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.376353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.376365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.376941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.376978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 
00:35:42.965 [2024-06-09 09:14:05.377620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.377657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.378147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.378159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.378644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.378680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.379212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.379225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.379808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.379845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 
00:35:42.965 [2024-06-09 09:14:05.380357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.380368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.380993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.381030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.381385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.381396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.381970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.382011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.965 [2024-06-09 09:14:05.382670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.382706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 
00:35:42.965 [2024-06-09 09:14:05.383099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.965 [2024-06-09 09:14:05.383111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.965 qpair failed and we were unable to recover it. 00:35:42.969 [... same connect()/qpair failure repeated continuously through 2024-06-09 09:14:05.440217 ...]
00:35:42.969 [2024-06-09 09:14:05.440707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.440743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.441228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.441240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.441781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.441817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.442315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.442327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.442809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.442820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 
00:35:42.969 [2024-06-09 09:14:05.443305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.443321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.443881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.443917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.444279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.444291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.444760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.444770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.445250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.445260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 
00:35:42.969 [2024-06-09 09:14:05.445831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.445868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.446337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.446349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.446939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.446976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.447601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.447638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.448129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.448141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 
00:35:42.969 [2024-06-09 09:14:05.448606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.448643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.449171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.449183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.449764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.449800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.450291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.450303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.450810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.450821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 
00:35:42.969 [2024-06-09 09:14:05.451282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.451292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.451677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.451687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.452173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.452184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.452762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.452799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.453291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.453303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 
00:35:42.969 [2024-06-09 09:14:05.453810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.453821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.454264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.969 [2024-06-09 09:14:05.454274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.969 qpair failed and we were unable to recover it. 00:35:42.969 [2024-06-09 09:14:05.454816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.454853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.455216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.455228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.455788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.455825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 
00:35:42.970 [2024-06-09 09:14:05.456189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.456201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.456644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.456680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.457172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.457184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.457756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.457793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.458288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.458300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 
00:35:42.970 [2024-06-09 09:14:05.458738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.458748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.459189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.459200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.459805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.459841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.460330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.460342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.460846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.460856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 
00:35:42.970 [2024-06-09 09:14:05.461296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.461307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.461856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.461893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.462298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.462310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.462770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.462780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.463017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.463033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 
00:35:42.970 [2024-06-09 09:14:05.463517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.463528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.463970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.463980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.464416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.464426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.464925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.464934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.465411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.465421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 
00:35:42.970 [2024-06-09 09:14:05.465961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.465971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.466418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.466431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.466893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.466902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.467237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.467247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.467601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.467611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 
00:35:42.970 [2024-06-09 09:14:05.468037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.468046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.468482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.468492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.468937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.468946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.469380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.469389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.469929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.469940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 
00:35:42.970 [2024-06-09 09:14:05.470606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.470643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.471131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.471142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.471680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.471717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.970 qpair failed and we were unable to recover it. 00:35:42.970 [2024-06-09 09:14:05.472182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.970 [2024-06-09 09:14:05.472195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.971 qpair failed and we were unable to recover it. 00:35:42.971 [2024-06-09 09:14:05.472756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.971 [2024-06-09 09:14:05.472793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.971 qpair failed and we were unable to recover it. 
00:35:42.971 [2024-06-09 09:14:05.473277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.971 [2024-06-09 09:14:05.473288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.971 qpair failed and we were unable to recover it. 00:35:42.971 [2024-06-09 09:14:05.473604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.971 [2024-06-09 09:14:05.473615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.971 qpair failed and we were unable to recover it. 00:35:42.971 [2024-06-09 09:14:05.474070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.971 [2024-06-09 09:14:05.474080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.971 qpair failed and we were unable to recover it. 00:35:42.971 [2024-06-09 09:14:05.474512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.971 [2024-06-09 09:14:05.474523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.971 qpair failed and we were unable to recover it. 00:35:42.971 [2024-06-09 09:14:05.474968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.971 [2024-06-09 09:14:05.474978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.971 qpair failed and we were unable to recover it. 
00:35:42.971 [2024-06-09 09:14:05.475413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.971 [2024-06-09 09:14:05.475423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.971 qpair failed and we were unable to recover it. 00:35:42.971 [2024-06-09 09:14:05.475915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.971 [2024-06-09 09:14:05.475924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.971 qpair failed and we were unable to recover it. 00:35:42.971 [2024-06-09 09:14:05.476262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.971 [2024-06-09 09:14:05.476271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.971 qpair failed and we were unable to recover it. 00:35:42.971 [2024-06-09 09:14:05.476720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.971 [2024-06-09 09:14:05.476735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.971 qpair failed and we were unable to recover it. 00:35:42.971 [2024-06-09 09:14:05.477193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.971 [2024-06-09 09:14:05.477203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.971 qpair failed and we were unable to recover it. 
00:35:42.971 [2024-06-09 09:14:05.477747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.971 [2024-06-09 09:14:05.477784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.971 qpair failed and we were unable to recover it. 00:35:42.971 [2024-06-09 09:14:05.478276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.971 [2024-06-09 09:14:05.478288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.971 qpair failed and we were unable to recover it. 00:35:42.971 [2024-06-09 09:14:05.478741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.971 [2024-06-09 09:14:05.478752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.971 qpair failed and we were unable to recover it. 00:35:42.971 [2024-06-09 09:14:05.479234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.971 [2024-06-09 09:14:05.479244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.971 qpair failed and we were unable to recover it. 00:35:42.971 [2024-06-09 09:14:05.479807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.971 [2024-06-09 09:14:05.479843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:42.971 qpair failed and we were unable to recover it. 
00:35:42.971 - 00:35:43.266 [2024-06-09 09:14:05.480332 - 09:14:05.535423] (repeats of the same error pair: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420; every retry ends with "qpair failed and we were unable to recover it.")
00:35:43.266 [2024-06-09 09:14:05.535863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.535872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.536320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.536329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.536931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.536968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.537676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.537713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.538238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.538250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 
00:35:43.266 [2024-06-09 09:14:05.538736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.538778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.539246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.539258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.539818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.539855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.540331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.540344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.540816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.540853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 
00:35:43.266 [2024-06-09 09:14:05.541322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.541335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.541805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.541816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.542282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.542292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.542779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.542790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.543313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.543323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 
00:35:43.266 [2024-06-09 09:14:05.543901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.543938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.544381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.544393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.544847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.544858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.545106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.545122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.545717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.545754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 
00:35:43.266 [2024-06-09 09:14:05.546228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.546240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.546771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.546807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.547327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.547338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.547847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.547857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.548288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.548298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 
00:35:43.266 [2024-06-09 09:14:05.548942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.548978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.549634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.549671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.550176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.550187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.266 [2024-06-09 09:14:05.550728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.266 [2024-06-09 09:14:05.550765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.266 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.551281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.551293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 
00:35:43.267 [2024-06-09 09:14:05.551768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.551778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.552259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.552268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.552750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.552792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.553314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.553326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.553817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.553854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 
00:35:43.267 [2024-06-09 09:14:05.554310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.554323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.554729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.554740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.555239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.555249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.555702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.555739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.556227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.556239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 
00:35:43.267 [2024-06-09 09:14:05.556790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.556827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.557308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.557320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.557882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.557918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.558436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.558459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.558999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.559010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 
00:35:43.267 [2024-06-09 09:14:05.559473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.559483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.559941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.559951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.560393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.560406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.560893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.560903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.561332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.561341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 
00:35:43.267 [2024-06-09 09:14:05.561843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.561853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.562368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.562377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.562922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.562959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.563611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.563647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.564141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.564152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 
00:35:43.267 [2024-06-09 09:14:05.564693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.564730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.565201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.565213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.565795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.565832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.566321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.566333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.566848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.566858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 
00:35:43.267 [2024-06-09 09:14:05.567213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.567223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.567701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.567738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.568252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.568264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.568831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.568868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.569361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.569373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 
00:35:43.267 [2024-06-09 09:14:05.569938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.569974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.570266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.570277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.267 qpair failed and we were unable to recover it. 00:35:43.267 [2024-06-09 09:14:05.570857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.267 [2024-06-09 09:14:05.570893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.268 qpair failed and we were unable to recover it. 00:35:43.268 [2024-06-09 09:14:05.571619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.268 [2024-06-09 09:14:05.571656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.268 qpair failed and we were unable to recover it. 00:35:43.268 [2024-06-09 09:14:05.571830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.268 [2024-06-09 09:14:05.571846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.268 qpair failed and we were unable to recover it. 
00:35:43.268 [2024-06-09 09:14:05.572222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.268 [2024-06-09 09:14:05.572232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.268 qpair failed and we were unable to recover it. 00:35:43.268 [2024-06-09 09:14:05.572705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.268 [2024-06-09 09:14:05.572716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.268 qpair failed and we were unable to recover it. 00:35:43.268 [2024-06-09 09:14:05.573158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.268 [2024-06-09 09:14:05.573168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.268 qpair failed and we were unable to recover it. 00:35:43.268 [2024-06-09 09:14:05.573711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.268 [2024-06-09 09:14:05.573748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.268 qpair failed and we were unable to recover it. 00:35:43.268 [2024-06-09 09:14:05.574289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.268 [2024-06-09 09:14:05.574301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.268 qpair failed and we were unable to recover it. 
00:35:43.268 [2024-06-09 09:14:05.574764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.268 [2024-06-09 09:14:05.574775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420
00:35:43.268 qpair failed and we were unable to recover it.
00:35:43.268 [... the identical three-line error triplet (connect() failed, errno = 111 / sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously from 09:14:05.574764 through 09:14:05.631749; only the timestamps differ. errno 111 is ECONNREFUSED: every reconnect attempt to 10.0.0.2:4420 was refused, so the qpair never recovered. ...]
00:35:43.271 [2024-06-09 09:14:05.632173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.632183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.271 [2024-06-09 09:14:05.632768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.632805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.271 [2024-06-09 09:14:05.633280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.633292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.271 [2024-06-09 09:14:05.633775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.633785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.271 [2024-06-09 09:14:05.634269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.634279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 
00:35:43.271 [2024-06-09 09:14:05.634773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.634815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.271 [2024-06-09 09:14:05.635291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.635302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.271 [2024-06-09 09:14:05.635781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.635793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.271 [2024-06-09 09:14:05.636258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.636268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.271 [2024-06-09 09:14:05.636726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.636763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 
00:35:43.271 [2024-06-09 09:14:05.637244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.637256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.271 [2024-06-09 09:14:05.637680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.637716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.271 [2024-06-09 09:14:05.637984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.638000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.271 [2024-06-09 09:14:05.638516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.638527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.271 [2024-06-09 09:14:05.638873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.638883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 
00:35:43.271 [2024-06-09 09:14:05.639347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.639356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.271 [2024-06-09 09:14:05.639642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.639653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.271 [2024-06-09 09:14:05.640163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.640173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.271 [2024-06-09 09:14:05.640522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.640532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.271 [2024-06-09 09:14:05.640997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.641006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 
00:35:43.271 [2024-06-09 09:14:05.641464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.641474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.271 [2024-06-09 09:14:05.641923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.641932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.271 [2024-06-09 09:14:05.642374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.271 [2024-06-09 09:14:05.642384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.271 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.642962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.642972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.643426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.643445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 
00:35:43.272 [2024-06-09 09:14:05.643922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.643931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.644384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.644393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.644898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.644908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.645351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.645360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.645846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.645883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 
00:35:43.272 [2024-06-09 09:14:05.646334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.646345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.646882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.646892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.647335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.647352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.647852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.647889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.648259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.648272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 
00:35:43.272 [2024-06-09 09:14:05.648859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.648896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.649346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.649358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.649820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.649858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.650349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.650361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.650872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.650883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 
00:35:43.272 [2024-06-09 09:14:05.651248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.651258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.651819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.651856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.652337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.652349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.652978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.653014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.653660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.653697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 
00:35:43.272 [2024-06-09 09:14:05.654092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.654103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.654724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.654760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.655236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.655249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.655817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.655854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.656418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.656432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 
00:35:43.272 [2024-06-09 09:14:05.656950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.656959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.657413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.657423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.657873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.657882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.658227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.658236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.658634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.658645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 
00:35:43.272 [2024-06-09 09:14:05.659114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.659123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.659481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.659491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.659867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.659876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.660315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.660325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.660662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.660676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 
00:35:43.272 [2024-06-09 09:14:05.661155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.661165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.272 qpair failed and we were unable to recover it. 00:35:43.272 [2024-06-09 09:14:05.661626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.272 [2024-06-09 09:14:05.661636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.273 qpair failed and we were unable to recover it. 00:35:43.273 [2024-06-09 09:14:05.662091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.273 [2024-06-09 09:14:05.662101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.273 qpair failed and we were unable to recover it. 00:35:43.273 [2024-06-09 09:14:05.662580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.273 [2024-06-09 09:14:05.662590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.273 qpair failed and we were unable to recover it. 00:35:43.273 [2024-06-09 09:14:05.663066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.273 [2024-06-09 09:14:05.663075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.273 qpair failed and we were unable to recover it. 
00:35:43.273 [2024-06-09 09:14:05.663439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.273 [2024-06-09 09:14:05.663449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.273 qpair failed and we were unable to recover it. 00:35:43.273 [2024-06-09 09:14:05.663694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.273 [2024-06-09 09:14:05.663703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.273 qpair failed and we were unable to recover it. 00:35:43.273 [2024-06-09 09:14:05.664189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.273 [2024-06-09 09:14:05.664198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.273 qpair failed and we were unable to recover it. 00:35:43.273 [2024-06-09 09:14:05.664646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.273 [2024-06-09 09:14:05.664655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.273 qpair failed and we were unable to recover it. 00:35:43.273 [2024-06-09 09:14:05.665097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.273 [2024-06-09 09:14:05.665107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.273 qpair failed and we were unable to recover it. 
00:35:43.273 [2024-06-09 09:14:05.665559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.273 [2024-06-09 09:14:05.665569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.273 qpair failed and we were unable to recover it. 00:35:43.273 [2024-06-09 09:14:05.666079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.273 [2024-06-09 09:14:05.666088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.273 qpair failed and we were unable to recover it. 00:35:43.273 [2024-06-09 09:14:05.666619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.273 [2024-06-09 09:14:05.666629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.273 qpair failed and we were unable to recover it. 00:35:43.273 [2024-06-09 09:14:05.667087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.273 [2024-06-09 09:14:05.667097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.273 qpair failed and we were unable to recover it. 00:35:43.273 [2024-06-09 09:14:05.667707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.273 [2024-06-09 09:14:05.667743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.273 qpair failed and we were unable to recover it. 
00:35:43.273 [2024-06-09 09:14:05.668112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.273 [2024-06-09 09:14:05.668124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.273 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1037 connect() failed, errno = 111 / nvme_tcp.c:2374 sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously from 09:14:05.668512 through 09:14:05.725134 ...]
00:35:43.276 [2024-06-09 09:14:05.725735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.725771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 00:35:43.276 [2024-06-09 09:14:05.726211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.726224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 00:35:43.276 [2024-06-09 09:14:05.726758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.726794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 00:35:43.276 [2024-06-09 09:14:05.727285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.727296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 00:35:43.276 [2024-06-09 09:14:05.727830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.727841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 
00:35:43.276 [2024-06-09 09:14:05.728202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.728212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 00:35:43.276 [2024-06-09 09:14:05.728677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.728720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 00:35:43.276 [2024-06-09 09:14:05.729213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.729225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 00:35:43.276 [2024-06-09 09:14:05.729759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.729796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 00:35:43.276 [2024-06-09 09:14:05.730284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.730296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 
00:35:43.276 [2024-06-09 09:14:05.730771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.730782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 00:35:43.276 [2024-06-09 09:14:05.731224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.731233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 00:35:43.276 [2024-06-09 09:14:05.731778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.731814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 00:35:43.276 [2024-06-09 09:14:05.732263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.732275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 00:35:43.276 [2024-06-09 09:14:05.732865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.732901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 
00:35:43.276 [2024-06-09 09:14:05.733371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.733384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 00:35:43.276 [2024-06-09 09:14:05.733929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.733965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 00:35:43.276 [2024-06-09 09:14:05.734611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.734647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 00:35:43.276 [2024-06-09 09:14:05.735140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.735152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 00:35:43.276 [2024-06-09 09:14:05.735732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.276 [2024-06-09 09:14:05.735768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.276 qpair failed and we were unable to recover it. 
00:35:43.277 [2024-06-09 09:14:05.736145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.736156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.736716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.736753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.737242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.737253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.737780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.737816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.738303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.738316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 
00:35:43.277 [2024-06-09 09:14:05.738796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.738807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.739335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.739344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.739824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.739861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.740350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.740362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.740808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.740820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 
00:35:43.277 [2024-06-09 09:14:05.741282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.741292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.741719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.741729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.742189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.742203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.742838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.742875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.743374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.743386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 
00:35:43.277 [2024-06-09 09:14:05.743953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.743989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.744587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.744624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.745116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.745128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.745652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.745689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.746182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.746194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 
00:35:43.277 [2024-06-09 09:14:05.746832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.746869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.747357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.747369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.747975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.748012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.748566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.748603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.749102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.749114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 
00:35:43.277 [2024-06-09 09:14:05.749647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.749683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.750186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.750197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.750726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.750762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.751249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.751262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.751810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.751847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 
00:35:43.277 [2024-06-09 09:14:05.752337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.752348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.752882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.752920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.753399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.753418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.753945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.753954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.754396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.754409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 
00:35:43.277 [2024-06-09 09:14:05.754928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.754964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.755601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.755637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.756141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.277 [2024-06-09 09:14:05.756153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.277 qpair failed and we were unable to recover it. 00:35:43.277 [2024-06-09 09:14:05.756679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.278 [2024-06-09 09:14:05.756716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.278 qpair failed and we were unable to recover it. 00:35:43.278 [2024-06-09 09:14:05.757209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.278 [2024-06-09 09:14:05.757225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.278 qpair failed and we were unable to recover it. 
00:35:43.278 [2024-06-09 09:14:05.757756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.278 [2024-06-09 09:14:05.757792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.278 qpair failed and we were unable to recover it. 00:35:43.278 [2024-06-09 09:14:05.758256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.278 [2024-06-09 09:14:05.758269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.278 qpair failed and we were unable to recover it. 00:35:43.278 [2024-06-09 09:14:05.758814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.278 [2024-06-09 09:14:05.758851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.278 qpair failed and we were unable to recover it. 00:35:43.278 [2024-06-09 09:14:05.759340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.278 [2024-06-09 09:14:05.759351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.278 qpair failed and we were unable to recover it. 00:35:43.278 [2024-06-09 09:14:05.759889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.278 [2024-06-09 09:14:05.759926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.278 qpair failed and we were unable to recover it. 
00:35:43.278 [2024-06-09 09:14:05.760315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.278 [2024-06-09 09:14:05.760328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.278 qpair failed and we were unable to recover it. 00:35:43.278 [2024-06-09 09:14:05.760733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.278 [2024-06-09 09:14:05.760744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.278 qpair failed and we were unable to recover it. 00:35:43.278 [2024-06-09 09:14:05.761109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.278 [2024-06-09 09:14:05.761119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.278 qpair failed and we were unable to recover it. 00:35:43.278 [2024-06-09 09:14:05.761697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.278 [2024-06-09 09:14:05.761734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.278 qpair failed and we were unable to recover it. 00:35:43.278 [2024-06-09 09:14:05.762228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.278 [2024-06-09 09:14:05.762240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.278 qpair failed and we were unable to recover it. 
00:35:43.278 [2024-06-09 09:14:05.762765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.278 [2024-06-09 09:14:05.762802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.278 qpair failed and we were unable to recover it. 00:35:43.278 [2024-06-09 09:14:05.763168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.278 [2024-06-09 09:14:05.763179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.278 qpair failed and we were unable to recover it. 00:35:43.278 [2024-06-09 09:14:05.763722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.278 [2024-06-09 09:14:05.763759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.278 qpair failed and we were unable to recover it. 00:35:43.278 [2024-06-09 09:14:05.764115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.278 [2024-06-09 09:14:05.764127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.278 qpair failed and we were unable to recover it. 00:35:43.278 [2024-06-09 09:14:05.764680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.278 [2024-06-09 09:14:05.764716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.278 qpair failed and we were unable to recover it. 
00:35:43.278 [2024-06-09 09:14:05.765216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.278 [2024-06-09 09:14:05.765227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420
00:35:43.278 qpair failed and we were unable to recover it.
00:35:43.551 [2024-06-09 09:14:05.822017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.822026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.822461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.822470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.822948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.822957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.823318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.823327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.823672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.823681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 
00:35:43.551 [2024-06-09 09:14:05.824139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.824149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.824627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.824638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.825094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.825104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.825563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.825572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.826005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.826014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 
00:35:43.551 [2024-06-09 09:14:05.826489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.826498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.826931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.826940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.827374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.827383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.827861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.827871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.828380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.828389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 
00:35:43.551 [2024-06-09 09:14:05.828872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.828882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.829340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.829350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.829896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.829933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.830414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.830426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.830868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.830878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 
00:35:43.551 [2024-06-09 09:14:05.831314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.831324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.831857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.831894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.832381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.832393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.832814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.832851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.833219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.833232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 
00:35:43.551 [2024-06-09 09:14:05.833781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.833817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.834309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.834321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.834676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.834686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.835144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.835153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.835618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.835655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 
00:35:43.551 [2024-06-09 09:14:05.836138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.836149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.836679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.836715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.837203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.837215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.551 qpair failed and we were unable to recover it. 00:35:43.551 [2024-06-09 09:14:05.837740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.551 [2024-06-09 09:14:05.837778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.838260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.838272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 
00:35:43.552 [2024-06-09 09:14:05.838824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.838860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.839222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.839233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.839794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.839830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.840312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.840324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.840880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.840917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 
00:35:43.552 [2024-06-09 09:14:05.841413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.841426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.841895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.841905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.842349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.842358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.842887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.842924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.843575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.843611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 
00:35:43.552 [2024-06-09 09:14:05.844099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.844111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.844678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.844715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.844984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.845000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.845463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.845475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.845908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.845917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 
00:35:43.552 [2024-06-09 09:14:05.846394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.846418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.846881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.846890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.847325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.847334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.847770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.847780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.848217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.848227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 
00:35:43.552 [2024-06-09 09:14:05.848766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.848802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.849283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.849295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.849807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.849818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.850257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.850266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.850788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.850832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 
00:35:43.552 [2024-06-09 09:14:05.851290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.851302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.851770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.851781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.852253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.852263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.852809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.852846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.853339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.853351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 
00:35:43.552 [2024-06-09 09:14:05.853883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.853919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.854364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.854376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.854922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.854958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.855561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.855597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.856094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.856106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 
00:35:43.552 [2024-06-09 09:14:05.856686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.856722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.857214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.552 [2024-06-09 09:14:05.857227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.552 qpair failed and we were unable to recover it. 00:35:43.552 [2024-06-09 09:14:05.857649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.553 [2024-06-09 09:14:05.857685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.553 qpair failed and we were unable to recover it. 00:35:43.553 [2024-06-09 09:14:05.858177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.553 [2024-06-09 09:14:05.858189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.553 qpair failed and we were unable to recover it. 00:35:43.553 [2024-06-09 09:14:05.858716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.553 [2024-06-09 09:14:05.858753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.553 qpair failed and we were unable to recover it. 
00:35:43.553 [2024-06-09 09:14:05.859213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.553 [2024-06-09 09:14:05.859224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420
00:35:43.553 qpair failed and we were unable to recover it.
[The same pair of errors — posix_sock_create connect() failing with errno = 111 (connection refused) and nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x178f270 (addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it." — repeats continuously from 09:14:05.859 through 09:14:05.916.]
00:35:43.556 [2024-06-09 09:14:05.916707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.556 [2024-06-09 09:14:05.916743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420
00:35:43.556 qpair failed and we were unable to recover it.
00:35:43.556 [2024-06-09 09:14:05.917127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.917138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.917692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.917728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.918169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.918181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.918725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.918762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.919238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.919249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 
00:35:43.556 [2024-06-09 09:14:05.919696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.919733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.920222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.920233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.920767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.920805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.921305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.921318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.921618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.921628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 
00:35:43.556 [2024-06-09 09:14:05.922105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.922115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.922579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.922616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.923102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.923114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.923547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.923558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.923992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.924002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 
00:35:43.556 [2024-06-09 09:14:05.924443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.924453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.924896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.924906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.925363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.925373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.925821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.925836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.926298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.926308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 
00:35:43.556 [2024-06-09 09:14:05.926763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.926772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.927280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.927289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.927725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.927735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.928169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.928178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.928718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.928754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 
00:35:43.556 [2024-06-09 09:14:05.929287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.929299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.929814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.929824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.930160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.930170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.556 [2024-06-09 09:14:05.930729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.556 [2024-06-09 09:14:05.930765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.556 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.931251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.931263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 
00:35:43.557 [2024-06-09 09:14:05.931745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.931781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.932275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.932286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.932737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.932748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.933188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.933198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.933751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.933788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 
00:35:43.557 [2024-06-09 09:14:05.934048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.934060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.934515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.934526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.934986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.934996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.935460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.935470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.935903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.935912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 
00:35:43.557 [2024-06-09 09:14:05.936349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.936358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.936853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.936863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.937309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.937318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.937776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.937786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.938125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.938135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 
00:35:43.557 [2024-06-09 09:14:05.938594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.938604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.938959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.938969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.939318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.939327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.939784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.939794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.940303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.940313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 
00:35:43.557 [2024-06-09 09:14:05.940780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.940790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.941225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.941234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.941805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.941842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.942328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.942340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.942791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.942802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 
00:35:43.557 [2024-06-09 09:14:05.943234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.943244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.943680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.943718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.944175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.944186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.944769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.944806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.945337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.945350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 
00:35:43.557 [2024-06-09 09:14:05.945921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.945958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.946558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.946594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.947079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.947091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.947619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.947655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.948135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.948147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 
00:35:43.557 [2024-06-09 09:14:05.948720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.948757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.949250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.557 [2024-06-09 09:14:05.949262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.557 qpair failed and we were unable to recover it. 00:35:43.557 [2024-06-09 09:14:05.949880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.558 [2024-06-09 09:14:05.949917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.558 qpair failed and we were unable to recover it. 00:35:43.558 [2024-06-09 09:14:05.950416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.558 [2024-06-09 09:14:05.950429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.558 qpair failed and we were unable to recover it. 00:35:43.558 [2024-06-09 09:14:05.950895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.558 [2024-06-09 09:14:05.950904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.558 qpair failed and we were unable to recover it. 
00:35:43.558 [2024-06-09 09:14:05.951346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.558 [2024-06-09 09:14:05.951355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.558 qpair failed and we were unable to recover it. 00:35:43.558 [2024-06-09 09:14:05.951902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.558 [2024-06-09 09:14:05.951939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.558 qpair failed and we were unable to recover it. 00:35:43.558 [2024-06-09 09:14:05.952431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.558 [2024-06-09 09:14:05.952454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.558 qpair failed and we were unable to recover it. 00:35:43.558 [2024-06-09 09:14:05.952945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.558 [2024-06-09 09:14:05.952955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.558 qpair failed and we were unable to recover it. 00:35:43.558 [2024-06-09 09:14:05.953392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.558 [2024-06-09 09:14:05.953406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.558 qpair failed and we were unable to recover it. 
00:35:43.558 [2024-06-09 09:14:05.953837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.558 [2024-06-09 09:14:05.953847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420
00:35:43.558 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair error for tqpair=0x178f270 (addr=10.0.0.2, port=4420) repeats continuously from 2024-06-09 09:14:05.953 through 09:14:06.012 ...]
00:35:43.561 [2024-06-09 09:14:06.012674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.012710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 00:35:43.561 [2024-06-09 09:14:06.013173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.013186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 00:35:43.561 [2024-06-09 09:14:06.013709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.013746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 00:35:43.561 [2024-06-09 09:14:06.014237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.014249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 00:35:43.561 [2024-06-09 09:14:06.014778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.014815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 
00:35:43.561 [2024-06-09 09:14:06.015356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.015369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 00:35:43.561 [2024-06-09 09:14:06.015915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.015951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 00:35:43.561 [2024-06-09 09:14:06.016610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.016647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 00:35:43.561 [2024-06-09 09:14:06.017137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.017150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 00:35:43.561 [2024-06-09 09:14:06.017748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.017786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 
00:35:43.561 [2024-06-09 09:14:06.018302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.018314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 00:35:43.561 [2024-06-09 09:14:06.018678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.018689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 00:35:43.561 [2024-06-09 09:14:06.019205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.019215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 00:35:43.561 [2024-06-09 09:14:06.019764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.019802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 00:35:43.561 [2024-06-09 09:14:06.020287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.020299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 
00:35:43.561 [2024-06-09 09:14:06.020754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.020765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 00:35:43.561 [2024-06-09 09:14:06.021133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.021142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 00:35:43.561 [2024-06-09 09:14:06.021682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.021718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 00:35:43.561 [2024-06-09 09:14:06.022233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.022249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 00:35:43.561 [2024-06-09 09:14:06.022781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.561 [2024-06-09 09:14:06.022817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.561 qpair failed and we were unable to recover it. 
00:35:43.561 [2024-06-09 09:14:06.023180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.023191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.023754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.023790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.024272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.024283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.024724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.024735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.025176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.025186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 
00:35:43.562 [2024-06-09 09:14:06.025738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.025774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.026273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.026285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.026735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.026747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.027210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.027221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.027843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.027880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 
00:35:43.562 [2024-06-09 09:14:06.028612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.028648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.029139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.029151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.029683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.029719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.030211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.030223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.030761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.030798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 
00:35:43.562 [2024-06-09 09:14:06.031297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.031308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.031774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.031785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.032226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.032235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.032808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.032845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.033310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.033322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 
00:35:43.562 [2024-06-09 09:14:06.033893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.033929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.034418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.034431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.034871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.034881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.035315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.035325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.035792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.035802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 
00:35:43.562 [2024-06-09 09:14:06.036155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.036169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.036417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.036435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.036909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.036919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.037353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.037362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.037842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.037879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 
00:35:43.562 [2024-06-09 09:14:06.038365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.038378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.038957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.038995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.039581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.039617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.040110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.040122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.040613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.040656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 
00:35:43.562 [2024-06-09 09:14:06.041148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.041160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.041661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.041697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.042187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.042199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.562 qpair failed and we were unable to recover it. 00:35:43.562 [2024-06-09 09:14:06.042770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.562 [2024-06-09 09:14:06.042807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.043296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.043308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 
00:35:43.563 [2024-06-09 09:14:06.043738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.043750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.044205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.044215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.044752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.044789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.045282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.045294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.045733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.045744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 
00:35:43.563 [2024-06-09 09:14:06.046179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.046189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.046716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.046752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.047183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.047195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.047767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.047804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.048266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.048277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 
00:35:43.563 [2024-06-09 09:14:06.048811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.048848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.049337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.049349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.049812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.049848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.050219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.050232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.050789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.050825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 
00:35:43.563 [2024-06-09 09:14:06.051317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.051329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.051819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.051830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.052289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.052298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.052821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.052831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.053281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.053290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 
00:35:43.563 [2024-06-09 09:14:06.053730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.053740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.054222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.054231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.054771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.054808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.055296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.055307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.055664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.055675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 
00:35:43.563 [2024-06-09 09:14:06.056138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.056148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.056721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.056759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.057215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.057226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.057792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.057828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.058320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.058332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 
00:35:43.563 [2024-06-09 09:14:06.058784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.058794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.059285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.059295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.059732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.059741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.060106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.060115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.060670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.060706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 
00:35:43.563 [2024-06-09 09:14:06.061015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.061029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f270 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.563 [2024-06-09 09:14:06.061061] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179ce30 (9): Bad file descriptor 00:35:43.563 [2024-06-09 09:14:06.061854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.563 [2024-06-09 09:14:06.061941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.563 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.062643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.062729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.063300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.063334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.063943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.064030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 
00:35:43.564 [2024-06-09 09:14:06.064502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.064545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.064969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.064998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.065604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.065691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.066274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.066308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.066871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.066957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 
00:35:43.564 [2024-06-09 09:14:06.067624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.067711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.068302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.068336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.068722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.068752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.069226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.069254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.069739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.069768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 
00:35:43.564 [2024-06-09 09:14:06.070270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.070297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.070826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.070855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.071339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.071377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.071887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.071916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.072388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.072428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 
00:35:43.564 [2024-06-09 09:14:06.072951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.072978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.073580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.073667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.074242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.074277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.074807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.074839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.075313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.075341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 
00:35:43.564 [2024-06-09 09:14:06.075844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.075874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.076357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.076384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.076899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.076928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.077313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.077340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.077923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.078009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 
00:35:43.564 [2024-06-09 09:14:06.078680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.078767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.079339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.079373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.079890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.564 [2024-06-09 09:14:06.079920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.564 qpair failed and we were unable to recover it. 00:35:43.564 [2024-06-09 09:14:06.080435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.080466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.080845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.080877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 
00:35:43.565 [2024-06-09 09:14:06.081355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.081384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.081899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.081927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.082629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.082715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.083267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.083301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.083676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.083715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 
00:35:43.565 [2024-06-09 09:14:06.084228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.084257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.084741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.084770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.085116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.085144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.085610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.085639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.086007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.086036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 
00:35:43.565 [2024-06-09 09:14:06.086540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.086570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.087069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.087096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.087617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.087645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.088142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.088169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.088665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.088693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 
00:35:43.565 [2024-06-09 09:14:06.089082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.089109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.089590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.089620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.090114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.090141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.090630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.090658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.091127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.091154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 
00:35:43.565 [2024-06-09 09:14:06.091729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.091817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.092425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.092461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.092972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.093000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.093583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.093669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.094308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.094343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 
00:35:43.565 [2024-06-09 09:14:06.094854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.094886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.095354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.095382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.095904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.095933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.096430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.096460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.096850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.096878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 
00:35:43.565 [2024-06-09 09:14:06.097357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.097384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.097950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.098039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.098695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.098782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.565 qpair failed and we were unable to recover it. 00:35:43.565 [2024-06-09 09:14:06.099244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.565 [2024-06-09 09:14:06.099278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.566 qpair failed and we were unable to recover it. 00:35:43.566 [2024-06-09 09:14:06.099687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.566 [2024-06-09 09:14:06.099717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.566 qpair failed and we were unable to recover it. 
00:35:43.566 [2024-06-09 09:14:06.100063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.566 [2024-06-09 09:14:06.100091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.566 qpair failed and we were unable to recover it. 00:35:43.835 [2024-06-09 09:14:06.100598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.836 [2024-06-09 09:14:06.100630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.836 qpair failed and we were unable to recover it. 00:35:43.836 [2024-06-09 09:14:06.101117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.836 [2024-06-09 09:14:06.101146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.836 qpair failed and we were unable to recover it. 00:35:43.836 [2024-06-09 09:14:06.101635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.836 [2024-06-09 09:14:06.101663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.836 qpair failed and we were unable to recover it. 00:35:43.836 [2024-06-09 09:14:06.102150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.836 [2024-06-09 09:14:06.102177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.836 qpair failed and we were unable to recover it. 
00:35:43.836 [2024-06-09 09:14:06.102760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.836 [2024-06-09 09:14:06.102848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:43.836 qpair failed and we were unable to recover it.
[the three lines above repeat for every reconnect attempt from 09:14:06.102760 through 09:14:06.158619, interleaved with the script trace below; only the timestamps differ]
00:35:43.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2846673 Killed                  "${NVMF_APP[@]}" "$@"
00:35:43.836 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:35:43.836 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:35:43.836 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:35:43.836 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable
00:35:43.836 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:43.837 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2847704
00:35:43.837 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2847704
00:35:43.837 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2847704 ']'
00:35:43.837 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:35:43.837 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:43.837 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100
00:35:43.837 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:43.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:43.837 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable
00:35:43.837 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:43.839 [2024-06-09 09:14:06.159120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.839 [2024-06-09 09:14:06.159147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.839 qpair failed and we were unable to recover it. 00:35:43.839 [2024-06-09 09:14:06.159555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.839 [2024-06-09 09:14:06.159594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.839 qpair failed and we were unable to recover it. 00:35:43.839 [2024-06-09 09:14:06.160102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.839 [2024-06-09 09:14:06.160131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.839 qpair failed and we were unable to recover it. 00:35:43.839 [2024-06-09 09:14:06.160722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.839 [2024-06-09 09:14:06.160810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.839 qpair failed and we were unable to recover it. 00:35:43.839 [2024-06-09 09:14:06.161420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.839 [2024-06-09 09:14:06.161457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.839 qpair failed and we were unable to recover it. 
00:35:43.839 [2024-06-09 09:14:06.162020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.839 [2024-06-09 09:14:06.162048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.839 qpair failed and we were unable to recover it. 00:35:43.839 [2024-06-09 09:14:06.162630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.839 [2024-06-09 09:14:06.162718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.839 qpair failed and we were unable to recover it. 00:35:43.839 [2024-06-09 09:14:06.163318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.839 [2024-06-09 09:14:06.163352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.839 qpair failed and we were unable to recover it. 00:35:43.839 [2024-06-09 09:14:06.163776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.839 [2024-06-09 09:14:06.163807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.839 qpair failed and we were unable to recover it. 00:35:43.839 [2024-06-09 09:14:06.164271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.839 [2024-06-09 09:14:06.164299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.839 qpair failed and we were unable to recover it. 
00:35:43.839 [2024-06-09 09:14:06.164839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.839 [2024-06-09 09:14:06.164869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.839 qpair failed and we were unable to recover it. 00:35:43.839 [2024-06-09 09:14:06.165381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.839 [2024-06-09 09:14:06.165434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.839 qpair failed and we were unable to recover it. 00:35:43.839 [2024-06-09 09:14:06.166002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.839 [2024-06-09 09:14:06.166030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.839 qpair failed and we were unable to recover it. 00:35:43.839 [2024-06-09 09:14:06.166620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.840 [2024-06-09 09:14:06.166719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.840 qpair failed and we were unable to recover it. 00:35:43.840 [2024-06-09 09:14:06.167096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.840 [2024-06-09 09:14:06.167131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.840 qpair failed and we were unable to recover it. 
00:35:43.840 [2024-06-09 09:14:06.168739] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:35:43.840 [2024-06-09 09:14:06.168791] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:43.842 EAL: No free 2048 kB hugepages reported on node 1
00:35:43.844 [2024-06-09 09:14:06.256906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.844 [2024-06-09 09:14:06.256933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.844 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.257433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.257474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.257951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.257980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.258270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.258297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.258878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.258968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 [2024-06-09 09:14:06.258970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:43.845 qpair failed and we were unable to recover it. 
00:35:43.845 [2024-06-09 09:14:06.259421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.259458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.259981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.260009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.260591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.260684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.261241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.261274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.261832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.261862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 
00:35:43.845 [2024-06-09 09:14:06.262362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.262390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.262928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.262956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.263656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.263747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.264384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.264453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.264960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.264988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 
00:35:43.845 [2024-06-09 09:14:06.265644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.265733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.266203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.266238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.266725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.266757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.267124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.267151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.267541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.267570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 
00:35:43.845 [2024-06-09 09:14:06.268057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.268085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.268610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.268638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.269143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.269170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.269673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.269702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.270223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.270251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 
00:35:43.845 [2024-06-09 09:14:06.270740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.270768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.271271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.271299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.271702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.271742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.272145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.272181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.272672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.272701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 
00:35:43.845 [2024-06-09 09:14:06.273199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.273226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.273532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.273563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.845 qpair failed and we were unable to recover it. 00:35:43.845 [2024-06-09 09:14:06.273899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.845 [2024-06-09 09:14:06.273927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.274305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.274337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.274861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.274890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 
00:35:43.846 [2024-06-09 09:14:06.275394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.275433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.275843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.275871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.276338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.276366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.276856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.276885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.277368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.277395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 
00:35:43.846 [2024-06-09 09:14:06.277754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.277782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.278334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.278369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.278916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.278945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.279499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.279528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.279909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.279936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 
00:35:43.846 [2024-06-09 09:14:06.280437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.280465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.280976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.281003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.281507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.281538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.282044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.282071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.282553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.282582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 
00:35:43.846 [2024-06-09 09:14:06.283070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.283097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.283571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.283599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.284082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.284110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.284617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.284645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.285131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.285158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 
00:35:43.846 [2024-06-09 09:14:06.285648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.285677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.286193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.286220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.286807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.286898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.287608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.287701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.288270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.288305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 
00:35:43.846 [2024-06-09 09:14:06.288700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.288731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.289259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.289286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.289686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.289729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.290043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.290071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.290493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.290531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 
00:35:43.846 [2024-06-09 09:14:06.291028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.846 [2024-06-09 09:14:06.291056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.846 qpair failed and we were unable to recover it. 00:35:43.846 [2024-06-09 09:14:06.291452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.847 [2024-06-09 09:14:06.291481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.847 qpair failed and we were unable to recover it. 00:35:43.847 [2024-06-09 09:14:06.292019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.847 [2024-06-09 09:14:06.292047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.847 qpair failed and we were unable to recover it. 00:35:43.847 [2024-06-09 09:14:06.292546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.847 [2024-06-09 09:14:06.292576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.847 qpair failed and we were unable to recover it. 00:35:43.847 [2024-06-09 09:14:06.293105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.847 [2024-06-09 09:14:06.293132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.847 qpair failed and we were unable to recover it. 
00:35:43.847 [2024-06-09 09:14:06.293624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.847 [2024-06-09 09:14:06.293653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.847 qpair failed and we were unable to recover it. 00:35:43.847 [2024-06-09 09:14:06.294160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.847 [2024-06-09 09:14:06.294187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.847 qpair failed and we were unable to recover it. 00:35:43.847 [2024-06-09 09:14:06.294679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.847 [2024-06-09 09:14:06.294708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.847 qpair failed and we were unable to recover it. 00:35:43.847 [2024-06-09 09:14:06.295191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.847 [2024-06-09 09:14:06.295219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.847 qpair failed and we were unable to recover it. 00:35:43.847 [2024-06-09 09:14:06.295826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.847 [2024-06-09 09:14:06.295916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.847 qpair failed and we were unable to recover it. 
00:35:43.847 [2024-06-09 09:14:06.296398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.847 [2024-06-09 09:14:06.296449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.847 qpair failed and we were unable to recover it. 00:35:43.847 [2024-06-09 09:14:06.296830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.847 [2024-06-09 09:14:06.296859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.847 qpair failed and we were unable to recover it. 00:35:43.847 [2024-06-09 09:14:06.297242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.847 [2024-06-09 09:14:06.297270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.847 qpair failed and we were unable to recover it. 00:35:43.847 [2024-06-09 09:14:06.297784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.847 [2024-06-09 09:14:06.297815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.847 qpair failed and we were unable to recover it. 00:35:43.847 [2024-06-09 09:14:06.298216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.847 [2024-06-09 09:14:06.298257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.847 qpair failed and we were unable to recover it. 
00:35:43.847 [2024-06-09 09:14:06.298643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.847 [2024-06-09 09:14:06.298677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:43.847 qpair failed and we were unable to recover it.
00:35:43.848 [2024-06-09 09:14:06.330061] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:43.848 [2024-06-09 09:14:06.330092] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:43.848 [2024-06-09 09:14:06.330100] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:43.848 [2024-06-09 09:14:06.330107] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:43.848 [2024-06-09 09:14:06.330113] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:43.848 [2024-06-09 09:14:06.330296] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5
00:35:43.848 [2024-06-09 09:14:06.330460] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6
00:35:43.849 [2024-06-09 09:14:06.330804] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7
00:35:43.849 [2024-06-09 09:14:06.330806] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4
00:35:43.850 [2024-06-09 09:14:06.359465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.850 [2024-06-09 09:14:06.359493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.850 qpair failed and we were unable to recover it. 00:35:43.850 [2024-06-09 09:14:06.359984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.850 [2024-06-09 09:14:06.360011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.850 qpair failed and we were unable to recover it. 00:35:43.850 [2024-06-09 09:14:06.360522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.850 [2024-06-09 09:14:06.360550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.850 qpair failed and we were unable to recover it. 00:35:43.850 [2024-06-09 09:14:06.361073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.850 [2024-06-09 09:14:06.361100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.850 qpair failed and we were unable to recover it. 00:35:43.850 [2024-06-09 09:14:06.361610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.850 [2024-06-09 09:14:06.361638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.850 qpair failed and we were unable to recover it. 
00:35:43.850 [2024-06-09 09:14:06.361998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.850 [2024-06-09 09:14:06.362025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.850 qpair failed and we were unable to recover it. 00:35:43.850 [2024-06-09 09:14:06.362529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.850 [2024-06-09 09:14:06.362559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.850 qpair failed and we were unable to recover it. 00:35:43.850 [2024-06-09 09:14:06.363055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.850 [2024-06-09 09:14:06.363082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.850 qpair failed and we were unable to recover it. 00:35:43.850 [2024-06-09 09:14:06.363377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.850 [2024-06-09 09:14:06.363416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.850 qpair failed and we were unable to recover it. 00:35:43.850 [2024-06-09 09:14:06.363970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.850 [2024-06-09 09:14:06.363997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.850 qpair failed and we were unable to recover it. 
00:35:43.850 [2024-06-09 09:14:06.364634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.850 [2024-06-09 09:14:06.364726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.850 qpair failed and we were unable to recover it. 00:35:43.850 [2024-06-09 09:14:06.365324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.850 [2024-06-09 09:14:06.365359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.850 qpair failed and we were unable to recover it. 00:35:43.850 [2024-06-09 09:14:06.365945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.850 [2024-06-09 09:14:06.365977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.850 qpair failed and we were unable to recover it. 00:35:43.850 [2024-06-09 09:14:06.366606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.850 [2024-06-09 09:14:06.366698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.850 qpair failed and we were unable to recover it. 00:35:43.850 [2024-06-09 09:14:06.367289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.850 [2024-06-09 09:14:06.367325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.850 qpair failed and we were unable to recover it. 
00:35:43.850 [2024-06-09 09:14:06.367887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.850 [2024-06-09 09:14:06.367918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.850 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.368400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.368442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.368961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.368989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.369624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.369715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.370311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.370357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 
00:35:43.851 [2024-06-09 09:14:06.370876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.370907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.371422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.371452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.371963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.371990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.372627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.372717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.373321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.373356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 
00:35:43.851 [2024-06-09 09:14:06.373888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.373919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.374432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.374462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.374873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.374900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.375312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.375341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.375918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.376011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 
00:35:43.851 [2024-06-09 09:14:06.376608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.376700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.376920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.376954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.377460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.377490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.377939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.377968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.378458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.378487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 
00:35:43.851 [2024-06-09 09:14:06.378818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.378849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.379107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.379133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.379340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.379367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.379676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.379706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.380188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.380215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 
00:35:43.851 [2024-06-09 09:14:06.380531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.380560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.381070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.381098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.381519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.381562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.381851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.381880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.382430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.382460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 
00:35:43.851 [2024-06-09 09:14:06.383035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.383063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.383565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.383595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.384112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.384139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:43.851 [2024-06-09 09:14:06.384746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.851 [2024-06-09 09:14:06.384774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:43.851 qpair failed and we were unable to recover it. 00:35:44.126 [2024-06-09 09:14:06.385288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.126 [2024-06-09 09:14:06.385317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.126 qpair failed and we were unable to recover it. 
00:35:44.126 [2024-06-09 09:14:06.385825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.126 [2024-06-09 09:14:06.385855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.126 qpair failed and we were unable to recover it. 00:35:44.126 [2024-06-09 09:14:06.386354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.126 [2024-06-09 09:14:06.386382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.126 qpair failed and we were unable to recover it. 00:35:44.126 [2024-06-09 09:14:06.386891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.126 [2024-06-09 09:14:06.386921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.126 qpair failed and we were unable to recover it. 00:35:44.126 [2024-06-09 09:14:06.387460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.126 [2024-06-09 09:14:06.387504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.126 qpair failed and we were unable to recover it. 00:35:44.126 [2024-06-09 09:14:06.388085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.126 [2024-06-09 09:14:06.388112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.126 qpair failed and we were unable to recover it. 
00:35:44.126 [2024-06-09 09:14:06.388660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.126 [2024-06-09 09:14:06.388751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.126 qpair failed and we were unable to recover it. 00:35:44.126 [2024-06-09 09:14:06.389352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.126 [2024-06-09 09:14:06.389387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.126 qpair failed and we were unable to recover it. 00:35:44.126 [2024-06-09 09:14:06.389972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.126 [2024-06-09 09:14:06.390002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.126 qpair failed and we were unable to recover it. 00:35:44.126 [2024-06-09 09:14:06.390598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.126 [2024-06-09 09:14:06.390689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.126 qpair failed and we were unable to recover it. 00:35:44.126 [2024-06-09 09:14:06.391238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.126 [2024-06-09 09:14:06.391284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.126 qpair failed and we were unable to recover it. 
00:35:44.126 [2024-06-09 09:14:06.391851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.126 [2024-06-09 09:14:06.391883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.126 qpair failed and we were unable to recover it. 00:35:44.126 [2024-06-09 09:14:06.392142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.126 [2024-06-09 09:14:06.392170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.126 qpair failed and we were unable to recover it. 00:35:44.126 [2024-06-09 09:14:06.392690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.126 [2024-06-09 09:14:06.392718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.126 qpair failed and we were unable to recover it. 00:35:44.126 [2024-06-09 09:14:06.393119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.126 [2024-06-09 09:14:06.393152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.126 qpair failed and we were unable to recover it. 00:35:44.126 [2024-06-09 09:14:06.393662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.126 [2024-06-09 09:14:06.393692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.126 qpair failed and we were unable to recover it. 
00:35:44.126 [2024-06-09 09:14:06.394059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.126 [2024-06-09 09:14:06.394087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.126 qpair failed and we were unable to recover it. 00:35:44.126 [2024-06-09 09:14:06.394580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.126 [2024-06-09 09:14:06.394608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.126 qpair failed and we were unable to recover it. 00:35:44.126 [2024-06-09 09:14:06.395127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.127 [2024-06-09 09:14:06.395154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.127 qpair failed and we were unable to recover it. 00:35:44.127 [2024-06-09 09:14:06.395650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.127 [2024-06-09 09:14:06.395678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.127 qpair failed and we were unable to recover it. 00:35:44.127 [2024-06-09 09:14:06.396179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.127 [2024-06-09 09:14:06.396206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.127 qpair failed and we were unable to recover it. 
00:35:44.127 [2024-06-09 09:14:06.396660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.127 [2024-06-09 09:14:06.396754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.127 qpair failed and we were unable to recover it. 00:35:44.127 [2024-06-09 09:14:06.397393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.127 [2024-06-09 09:14:06.397457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.127 qpair failed and we were unable to recover it. 00:35:44.127 [2024-06-09 09:14:06.398000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.127 [2024-06-09 09:14:06.398029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.127 qpair failed and we were unable to recover it. 00:35:44.127 [2024-06-09 09:14:06.398649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.127 [2024-06-09 09:14:06.398743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.127 qpair failed and we were unable to recover it. 00:35:44.127 [2024-06-09 09:14:06.399143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.127 [2024-06-09 09:14:06.399178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.127 qpair failed and we were unable to recover it. 
00:35:44.127 [2024-06-09 09:14:06.399589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.127 [2024-06-09 09:14:06.399636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.127 qpair failed and we were unable to recover it. 00:35:44.127 [2024-06-09 09:14:06.400120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.127 [2024-06-09 09:14:06.400148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.127 qpair failed and we were unable to recover it. 00:35:44.127 [2024-06-09 09:14:06.400433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.127 [2024-06-09 09:14:06.400460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.127 qpair failed and we were unable to recover it. 00:35:44.127 [2024-06-09 09:14:06.400972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.127 [2024-06-09 09:14:06.400999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.127 qpair failed and we were unable to recover it. 00:35:44.127 [2024-06-09 09:14:06.401370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.127 [2024-06-09 09:14:06.401398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.127 qpair failed and we were unable to recover it. 
00:35:44.130 [2024-06-09 09:14:06.460464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.130 [2024-06-09 09:14:06.460492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.130 qpair failed and we were unable to recover it. 00:35:44.130 [2024-06-09 09:14:06.460993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.130 [2024-06-09 09:14:06.461021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.130 qpair failed and we were unable to recover it. 00:35:44.130 [2024-06-09 09:14:06.461580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.130 [2024-06-09 09:14:06.461609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.130 qpair failed and we were unable to recover it. 00:35:44.130 [2024-06-09 09:14:06.462099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.130 [2024-06-09 09:14:06.462128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.130 qpair failed and we were unable to recover it. 00:35:44.130 [2024-06-09 09:14:06.462398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.130 [2024-06-09 09:14:06.462437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.130 qpair failed and we were unable to recover it. 
00:35:44.130 [2024-06-09 09:14:06.462822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.130 [2024-06-09 09:14:06.462849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.130 qpair failed and we were unable to recover it. 00:35:44.130 [2024-06-09 09:14:06.463346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.130 [2024-06-09 09:14:06.463373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.130 qpair failed and we were unable to recover it. 00:35:44.130 [2024-06-09 09:14:06.463870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.130 [2024-06-09 09:14:06.463901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.130 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.464397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.464436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.464954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.464981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 
00:35:44.131 [2024-06-09 09:14:06.465601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.465694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.466266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.466302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.466695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.466725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.467222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.467251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.467512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.467541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 
00:35:44.131 [2024-06-09 09:14:06.467955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.467981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.468477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.468517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.468800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.468827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.469340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.469367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.469889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.469919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 
00:35:44.131 [2024-06-09 09:14:06.470322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.470348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.470967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.470996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.471415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.471443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.471852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.471897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.472601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.472696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 
00:35:44.131 [2024-06-09 09:14:06.473071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.473107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.473394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.473456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.474003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.474031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.474318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.474345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.474860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.474890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 
00:35:44.131 [2024-06-09 09:14:06.475418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.475448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.475962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.475990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.476350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.476377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.477035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.131 [2024-06-09 09:14:06.477130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.131 qpair failed and we were unable to recover it. 00:35:44.131 [2024-06-09 09:14:06.477698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.477793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 
00:35:44.132 [2024-06-09 09:14:06.478235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.478270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.478783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.478814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.479096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.479126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.479649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.479678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.480218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.480245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 
00:35:44.132 [2024-06-09 09:14:06.480742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.480771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.481261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.481289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.481825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.481855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.482238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.482267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.482825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.482855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 
00:35:44.132 [2024-06-09 09:14:06.483352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.483379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.483893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.483922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.484460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.484506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.485010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.485037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.485563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.485591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 
00:35:44.132 [2024-06-09 09:14:06.486095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.486122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.486617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.486645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.487161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.487189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.487780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.487876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.488458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.488496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 
00:35:44.132 [2024-06-09 09:14:06.489012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.489041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.489552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.489593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.490109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.490137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.490655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.490685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.491185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.491212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 
00:35:44.132 [2024-06-09 09:14:06.491626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.491723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.492067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.492111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.492359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.492389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.492874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.492903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.493442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.493472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 
00:35:44.132 [2024-06-09 09:14:06.493986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.132 [2024-06-09 09:14:06.494015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.132 qpair failed and we were unable to recover it. 00:35:44.132 [2024-06-09 09:14:06.494513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.133 [2024-06-09 09:14:06.494542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.133 qpair failed and we were unable to recover it. 00:35:44.133 [2024-06-09 09:14:06.495068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.133 [2024-06-09 09:14:06.495095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.133 qpair failed and we were unable to recover it. 00:35:44.133 [2024-06-09 09:14:06.495609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.133 [2024-06-09 09:14:06.495638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.133 qpair failed and we were unable to recover it. 00:35:44.133 [2024-06-09 09:14:06.496134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.133 [2024-06-09 09:14:06.496163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.133 qpair failed and we were unable to recover it. 
00:35:44.133 [2024-06-09 09:14:06.496757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.133 [2024-06-09 09:14:06.496853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.133 qpair failed and we were unable to recover it. 00:35:44.133 [2024-06-09 09:14:06.497237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.133 [2024-06-09 09:14:06.497272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.133 qpair failed and we were unable to recover it. 00:35:44.133 [2024-06-09 09:14:06.497559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.133 [2024-06-09 09:14:06.497590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.133 qpair failed and we were unable to recover it. 00:35:44.133 [2024-06-09 09:14:06.498116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.133 [2024-06-09 09:14:06.498145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.133 qpair failed and we were unable to recover it. 00:35:44.133 [2024-06-09 09:14:06.498760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.133 [2024-06-09 09:14:06.498790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.133 qpair failed and we were unable to recover it. 
00:35:44.133 [2024-06-09 09:14:06.499306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.133 [2024-06-09 09:14:06.499333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.133 qpair failed and we were unable to recover it. 00:35:44.133 [2024-06-09 09:14:06.499834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.133 [2024-06-09 09:14:06.499863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.133 qpair failed and we were unable to recover it. 00:35:44.133 [2024-06-09 09:14:06.500379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.133 [2024-06-09 09:14:06.500417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.133 qpair failed and we were unable to recover it. 00:35:44.133 [2024-06-09 09:14:06.500951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.133 [2024-06-09 09:14:06.500979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.133 qpair failed and we were unable to recover it. 00:35:44.133 [2024-06-09 09:14:06.501607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.133 [2024-06-09 09:14:06.501704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.133 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / qpair-recovery-failure record for tqpair=0x7f6df0000b90 (addr=10.0.0.2, port=4420) repeats continuously through 2024-06-09 09:14:06.557243 ...]
00:35:44.136 [2024-06-09 09:14:06.557757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.136 [2024-06-09 09:14:06.557787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.136 qpair failed and we were unable to recover it. 00:35:44.136 [2024-06-09 09:14:06.558293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.136 [2024-06-09 09:14:06.558321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.136 qpair failed and we were unable to recover it. 00:35:44.136 [2024-06-09 09:14:06.558829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.136 [2024-06-09 09:14:06.558858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.136 qpair failed and we were unable to recover it. 00:35:44.136 [2024-06-09 09:14:06.559364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.136 [2024-06-09 09:14:06.559391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.136 qpair failed and we were unable to recover it. 00:35:44.136 [2024-06-09 09:14:06.559899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.136 [2024-06-09 09:14:06.559927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.136 qpair failed and we were unable to recover it. 
00:35:44.136 [2024-06-09 09:14:06.560471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.136 [2024-06-09 09:14:06.560546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.136 qpair failed and we were unable to recover it. 00:35:44.136 [2024-06-09 09:14:06.561127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.136 [2024-06-09 09:14:06.561154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.136 qpair failed and we were unable to recover it. 00:35:44.136 [2024-06-09 09:14:06.561754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.136 [2024-06-09 09:14:06.561853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.136 qpair failed and we were unable to recover it. 00:35:44.136 [2024-06-09 09:14:06.562218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.136 [2024-06-09 09:14:06.562252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.136 qpair failed and we were unable to recover it. 00:35:44.136 [2024-06-09 09:14:06.562801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.136 [2024-06-09 09:14:06.562831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.136 qpair failed and we were unable to recover it. 
00:35:44.136 [2024-06-09 09:14:06.563245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.136 [2024-06-09 09:14:06.563281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.136 qpair failed and we were unable to recover it. 00:35:44.136 [2024-06-09 09:14:06.563790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.136 [2024-06-09 09:14:06.563819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.564117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.564147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.564595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.564625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.564864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.564891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 
00:35:44.137 [2024-06-09 09:14:06.565422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.565452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.565884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.565929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.566446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.566476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.567017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.567044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.567345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.567372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 
00:35:44.137 [2024-06-09 09:14:06.567952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.567980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.568510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.568539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.568964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.568991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.569257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.569283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.569803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.569831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 
00:35:44.137 [2024-06-09 09:14:06.570356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.570383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.570898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.570936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.571197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.571223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.571757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.571786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.572054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.572083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 
00:35:44.137 [2024-06-09 09:14:06.572585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.572613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.573128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.573155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.573676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.573704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.574201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.574227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.574869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.574971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 
00:35:44.137 [2024-06-09 09:14:06.575427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.575464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.575875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.575905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.576296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.576324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.576827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.576858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.577382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.577420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 
00:35:44.137 [2024-06-09 09:14:06.577996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.578024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.578655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.578758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.579381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.579433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.579936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.579964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.580652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.580753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 
00:35:44.137 [2024-06-09 09:14:06.581198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.581233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.137 qpair failed and we were unable to recover it. 00:35:44.137 [2024-06-09 09:14:06.581898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.137 [2024-06-09 09:14:06.582000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.582683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.582783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.583152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.583188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.583707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.583738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 
00:35:44.138 [2024-06-09 09:14:06.584245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.584274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.584798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.584828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.585263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.585291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.585860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.585891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.586421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.586450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 
00:35:44.138 [2024-06-09 09:14:06.586881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.586929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.587465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.587517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.587815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.587842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.588364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.588392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.588680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.588707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 
00:35:44.138 [2024-06-09 09:14:06.589130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.589158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.589566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.589595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.590025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.590052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.590620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.590649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.590922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.590949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 
00:35:44.138 [2024-06-09 09:14:06.591350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.591381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.591898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.591936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.592336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.592364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.592946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.592975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.593506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.593536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 
00:35:44.138 [2024-06-09 09:14:06.594067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.594094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.594387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.594424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.594987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.595014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.595644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.595746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 00:35:44.138 [2024-06-09 09:14:06.596200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.596234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. 
00:35:44.138 [2024-06-09 09:14:06.596874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.138 [2024-06-09 09:14:06.596976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.138 qpair failed and we were unable to recover it. [... same connect() errno = 111 / qpair recovery error repeated for each retry ...]
00:35:44.142 [2024-06-09 09:14:06.657814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.657841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.658369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.658397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.658742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.658770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.659272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.659300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.659813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.659842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 
00:35:44.142 [2024-06-09 09:14:06.660367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.660395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.660901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.660929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.661470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.661523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.662080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.662107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.662441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.662473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 
00:35:44.142 [2024-06-09 09:14:06.663054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.663082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.663702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.663805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.664321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.664368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.664851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.664884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.665287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.665316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 
00:35:44.142 [2024-06-09 09:14:06.665826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.665855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.666362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.666390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.666979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.667007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.667664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.667764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.668400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.668454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 
00:35:44.142 [2024-06-09 09:14:06.668732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.668761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.669021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.669048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.669432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.669462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.670010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.142 [2024-06-09 09:14:06.670041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.142 qpair failed and we were unable to recover it. 00:35:44.142 [2024-06-09 09:14:06.670274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.143 [2024-06-09 09:14:06.670302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.143 qpair failed and we were unable to recover it. 
00:35:44.143 [2024-06-09 09:14:06.670810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.143 [2024-06-09 09:14:06.670839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.143 qpair failed and we were unable to recover it. 00:35:44.143 [2024-06-09 09:14:06.671365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.143 [2024-06-09 09:14:06.671394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.143 qpair failed and we were unable to recover it. 00:35:44.143 [2024-06-09 09:14:06.672030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.143 [2024-06-09 09:14:06.672059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.143 qpair failed and we were unable to recover it. 00:35:44.143 [2024-06-09 09:14:06.672347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.143 [2024-06-09 09:14:06.672373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.143 qpair failed and we were unable to recover it. 00:35:44.143 [2024-06-09 09:14:06.672901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.143 [2024-06-09 09:14:06.672929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.143 qpair failed and we were unable to recover it. 
00:35:44.143 [2024-06-09 09:14:06.673466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.143 [2024-06-09 09:14:06.673518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.143 qpair failed and we were unable to recover it. 00:35:44.143 [2024-06-09 09:14:06.674056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.143 [2024-06-09 09:14:06.674083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.143 qpair failed and we were unable to recover it. 00:35:44.143 [2024-06-09 09:14:06.674388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.143 [2024-06-09 09:14:06.674427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.143 qpair failed and we were unable to recover it. 00:35:44.143 [2024-06-09 09:14:06.674930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.143 [2024-06-09 09:14:06.674957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.143 qpair failed and we were unable to recover it. 00:35:44.412 [2024-06-09 09:14:06.675360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.412 [2024-06-09 09:14:06.675391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 
00:35:44.413 [2024-06-09 09:14:06.675919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.675948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.676652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.676766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.677320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.677356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.677900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.677932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.678400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.678448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 
00:35:44.413 [2024-06-09 09:14:06.678970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.678998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.679648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.679750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.680278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.680315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.680872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.680976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.681630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.681731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 
00:35:44.413 [2024-06-09 09:14:06.682371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.682426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.682739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.682767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.683035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.683063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.683511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.683559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.683861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.683890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 
00:35:44.413 [2024-06-09 09:14:06.684058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.684085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.684487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.684516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.684837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.684867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.685307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.685335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.685905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.685934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 
00:35:44.413 [2024-06-09 09:14:06.686449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.686478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.686794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.686831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.687362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.687389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.687697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.687728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.688232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.688261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 
00:35:44.413 [2024-06-09 09:14:06.688779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.688809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.689321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.689348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.689906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.689935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.690444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.690481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.691007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.691034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 
00:35:44.413 [2024-06-09 09:14:06.691570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.691599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.692115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.692143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.692678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.692707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.693130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.693157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.693447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.693474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 
00:35:44.413 [2024-06-09 09:14:06.693892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.693919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.694188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.413 [2024-06-09 09:14:06.694215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.413 qpair failed and we were unable to recover it. 00:35:44.413 [2024-06-09 09:14:06.694719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.414 [2024-06-09 09:14:06.694748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.414 qpair failed and we were unable to recover it. 00:35:44.414 [2024-06-09 09:14:06.695013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.414 [2024-06-09 09:14:06.695039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.414 qpair failed and we were unable to recover it. 00:35:44.414 [2024-06-09 09:14:06.695488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.414 [2024-06-09 09:14:06.695517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.414 qpair failed and we were unable to recover it. 
00:35:44.414 [2024-06-09 09:14:06.696007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.414 [2024-06-09 09:14:06.696034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.414 qpair failed and we were unable to recover it. 00:35:44.414 [2024-06-09 09:14:06.696467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.414 [2024-06-09 09:14:06.696511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.414 qpair failed and we were unable to recover it. 00:35:44.414 [2024-06-09 09:14:06.697051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.414 [2024-06-09 09:14:06.697079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.414 qpair failed and we were unable to recover it. 00:35:44.414 [2024-06-09 09:14:06.697376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.414 [2024-06-09 09:14:06.697415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.414 qpair failed and we were unable to recover it. 00:35:44.414 [2024-06-09 09:14:06.697913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.414 [2024-06-09 09:14:06.697941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.414 qpair failed and we were unable to recover it. 
00:35:44.414 [2024-06-09 09:14:06.698457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.414 [2024-06-09 09:14:06.698485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.414 qpair failed and we were unable to recover it. 00:35:44.414 [2024-06-09 09:14:06.699016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.414 [2024-06-09 09:14:06.699044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.414 qpair failed and we were unable to recover it. 00:35:44.414 [2024-06-09 09:14:06.699548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.414 [2024-06-09 09:14:06.699576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.414 qpair failed and we were unable to recover it. 00:35:44.414 [2024-06-09 09:14:06.700102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.414 [2024-06-09 09:14:06.700130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.414 qpair failed and we were unable to recover it. 00:35:44.414 [2024-06-09 09:14:06.700668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.414 [2024-06-09 09:14:06.700697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.414 qpair failed and we were unable to recover it. 
00:35:44.414 [2024-06-09 09:14:06.701200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.701228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.701841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.701944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.702654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.702756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.703136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.703171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.703686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.703719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.704249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.704277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.704683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.704713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.705312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.705341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.705888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.705917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.706436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.706468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.706878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.706924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.707258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.707287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.707604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.707633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.708173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.708201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.708702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.708733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.709144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.709175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.709482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.709515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.710077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.710105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.710370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.710433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.710734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.710762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.711246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.711273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.711817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.711846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.712375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.712417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.712923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.712951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.713648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.414 [2024-06-09 09:14:06.713750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.414 qpair failed and we were unable to recover it.
00:35:44.414 [2024-06-09 09:14:06.714365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.714418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.714947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.714975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.715651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.715753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.716247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.716283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.716789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.716820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.717089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.717115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.717618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.717646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.717916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.717944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.718443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.718475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.718879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.718907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.719437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.719466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.719971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.719999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.720585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.720616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.721166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.721193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.721610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.721639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.722161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.722189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.722662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.722690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.723056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.723084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.723454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.723482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.723853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.723880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.724318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.724345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.724870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.724899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.725423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.725452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.725974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.726002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.726653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.726755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.727381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.727439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.727967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.727997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.728629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.728730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.729356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.729392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.729864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.729913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.730452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.730489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.730947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.730976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.731514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.731544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.732086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.732127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.732428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.732459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.732865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.732903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.733302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.733334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.733870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.733900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.415 qpair failed and we were unable to recover it.
00:35:44.415 [2024-06-09 09:14:06.734200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.415 [2024-06-09 09:14:06.734226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.734627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.734655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.735175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.735202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.735707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.735736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.736264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.736292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.736845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.736876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.737291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.737323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.737832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.737861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.738375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.738417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.738747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.738776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.739377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.739429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.739888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.739917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.740472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.740524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.741055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.741083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.741587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.741615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.742118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.742145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.742767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.742870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.743216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.743252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.743803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.743834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.744102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.744130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.744560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.744592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.745113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.745141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.745669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.745700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.746216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.746243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.746755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.746787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.747313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.747341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.747871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.747900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.748425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.748455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.749000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.749028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.749668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.749769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.750269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.750314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.750871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.416 [2024-06-09 09:14:06.750905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.416 qpair failed and we were unable to recover it.
00:35:44.416 [2024-06-09 09:14:06.751134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.416 [2024-06-09 09:14:06.751161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.416 qpair failed and we were unable to recover it. 00:35:44.416 [2024-06-09 09:14:06.751598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.416 [2024-06-09 09:14:06.751631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.416 qpair failed and we were unable to recover it. 00:35:44.416 [2024-06-09 09:14:06.751964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.416 [2024-06-09 09:14:06.751995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.416 qpair failed and we were unable to recover it. 00:35:44.416 [2024-06-09 09:14:06.752436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.416 [2024-06-09 09:14:06.752484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.416 qpair failed and we were unable to recover it. 00:35:44.416 [2024-06-09 09:14:06.752883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.416 [2024-06-09 09:14:06.752912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.416 qpair failed and we were unable to recover it. 
00:35:44.416 [2024-06-09 09:14:06.753376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.416 [2024-06-09 09:14:06.753418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.416 qpair failed and we were unable to recover it. 00:35:44.416 [2024-06-09 09:14:06.753820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.416 [2024-06-09 09:14:06.753847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.754376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.754414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.754932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.754960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.755486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.755515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 
00:35:44.417 [2024-06-09 09:14:06.756025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.756054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.756559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.756589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.757130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.757160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.757667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.757697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.758203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.758232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 
00:35:44.417 [2024-06-09 09:14:06.758841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.758944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.759677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.759779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.760398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.760456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.760983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.761011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.761629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.761733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 
00:35:44.417 [2024-06-09 09:14:06.762392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.762445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.762982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.763011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.763680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.763781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.764392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.764448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.764972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.765002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 
00:35:44.417 [2024-06-09 09:14:06.765641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.765743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.766368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.766420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.766964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.766994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.767624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.767725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.768334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.768369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 
00:35:44.417 [2024-06-09 09:14:06.768906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.768937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.769224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.769267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.769657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.769688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.770206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.770234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.770506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.770534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 
00:35:44.417 [2024-06-09 09:14:06.771121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.771149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.771665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.417 [2024-06-09 09:14:06.771694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.417 qpair failed and we were unable to recover it. 00:35:44.417 [2024-06-09 09:14:06.772203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.772232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.772746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.772776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.773360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.773388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 
00:35:44.418 [2024-06-09 09:14:06.773951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.773981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.774674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.774775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.775399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.775456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.775960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.776007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.776631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.776734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 
00:35:44.418 [2024-06-09 09:14:06.777120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.777156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.777770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.777871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.778652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.778755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.779374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.779441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.780025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.780054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 
00:35:44.418 [2024-06-09 09:14:06.780673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.780775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.781394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.781448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.781788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.781819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.782336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.782364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.782819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.782868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 
00:35:44.418 [2024-06-09 09:14:06.783309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.783344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.783894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.783925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.784470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.784503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.785017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.785045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.785573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.785604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 
00:35:44.418 [2024-06-09 09:14:06.786121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.786150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.786561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.786593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.787126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.787154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.787667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.787695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.788164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.788194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 
00:35:44.418 [2024-06-09 09:14:06.788701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.788803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.789459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.789497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.790030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.790059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.790688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.790790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.791374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.791426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 
00:35:44.418 [2024-06-09 09:14:06.791953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.791984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.792684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.792788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.793432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.793471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.793914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.793943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 00:35:44.418 [2024-06-09 09:14:06.794612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.794713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.418 qpair failed and we were unable to recover it. 
00:35:44.418 [2024-06-09 09:14:06.795234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.418 [2024-06-09 09:14:06.795270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.419 qpair failed and we were unable to recover it. 00:35:44.419 [2024-06-09 09:14:06.795828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.419 [2024-06-09 09:14:06.795860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.419 qpair failed and we were unable to recover it. 00:35:44.419 [2024-06-09 09:14:06.796381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.419 [2024-06-09 09:14:06.796422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.419 qpair failed and we were unable to recover it. 00:35:44.419 [2024-06-09 09:14:06.796935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.419 [2024-06-09 09:14:06.796964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.419 qpair failed and we were unable to recover it. 00:35:44.419 [2024-06-09 09:14:06.797620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.419 [2024-06-09 09:14:06.797723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.419 qpair failed and we were unable to recover it. 
00:35:44.419 [2024-06-09 09:14:06.798343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.419 [2024-06-09 09:14:06.798379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.419 qpair failed and we were unable to recover it. 00:35:44.419 [2024-06-09 09:14:06.798945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.419 [2024-06-09 09:14:06.798976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.419 qpair failed and we were unable to recover it. 00:35:44.419 [2024-06-09 09:14:06.799647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.419 [2024-06-09 09:14:06.799750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.419 qpair failed and we were unable to recover it. 00:35:44.419 [2024-06-09 09:14:06.800364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.419 [2024-06-09 09:14:06.800433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.419 qpair failed and we were unable to recover it. 00:35:44.419 [2024-06-09 09:14:06.800726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.419 [2024-06-09 09:14:06.800757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.419 qpair failed and we were unable to recover it. 
00:35:44.419 [2024-06-09 09:14:06.801272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.419 [2024-06-09 09:14:06.801300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.419 qpair failed and we were unable to recover it. 00:35:44.419 [2024-06-09 09:14:06.801800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.419 [2024-06-09 09:14:06.801831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.419 qpair failed and we were unable to recover it. 00:35:44.419 [2024-06-09 09:14:06.802338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.419 [2024-06-09 09:14:06.802366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.419 qpair failed and we were unable to recover it. 00:35:44.419 [2024-06-09 09:14:06.802914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.419 [2024-06-09 09:14:06.802946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.419 qpair failed and we were unable to recover it. 00:35:44.419 [2024-06-09 09:14:06.803604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.419 [2024-06-09 09:14:06.803706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.419 qpair failed and we were unable to recover it. 
00:35:44.422 [2024-06-09 09:14:06.862007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.862035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 00:35:44.422 [2024-06-09 09:14:06.862545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.862573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 00:35:44.422 [2024-06-09 09:14:06.863064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.863091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 00:35:44.422 [2024-06-09 09:14:06.863381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.863431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 00:35:44.422 [2024-06-09 09:14:06.863938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.863968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 
00:35:44.422 [2024-06-09 09:14:06.864463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.864493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 00:35:44.422 [2024-06-09 09:14:06.865019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.865046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 00:35:44.422 [2024-06-09 09:14:06.865568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.865597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 00:35:44.422 [2024-06-09 09:14:06.866112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.866140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 00:35:44.422 [2024-06-09 09:14:06.866667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.866697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 
00:35:44.422 [2024-06-09 09:14:06.867209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.867237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 00:35:44.422 [2024-06-09 09:14:06.867866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.867968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 00:35:44.422 [2024-06-09 09:14:06.868256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.868293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 00:35:44.422 [2024-06-09 09:14:06.868808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.868840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 00:35:44.422 [2024-06-09 09:14:06.869221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.869249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 
00:35:44.422 [2024-06-09 09:14:06.869660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.869689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 00:35:44.422 [2024-06-09 09:14:06.870214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.870243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 00:35:44.422 [2024-06-09 09:14:06.870751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.870782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 00:35:44.422 [2024-06-09 09:14:06.871202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.871230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 00:35:44.422 [2024-06-09 09:14:06.871743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.871773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 
00:35:44.422 [2024-06-09 09:14:06.872261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.872289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 00:35:44.422 [2024-06-09 09:14:06.872790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.422 [2024-06-09 09:14:06.872818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.422 qpair failed and we were unable to recover it. 00:35:44.422 [2024-06-09 09:14:06.873311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.873338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.873859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.873888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.874395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.874457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 
00:35:44.423 [2024-06-09 09:14:06.874947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.874976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.875601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.875692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.876288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.876323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.876723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.876754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.877249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.877277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 
00:35:44.423 [2024-06-09 09:14:06.877570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.877599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.877995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.878028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.878311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.878337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.878719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.878761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.879254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.879283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 
00:35:44.423 [2024-06-09 09:14:06.879677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.879707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.880086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.880113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.880495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.880523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.880896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.880924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.881517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.881546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 
00:35:44.423 [2024-06-09 09:14:06.882073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.882101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.882356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.882382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.882675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.882704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.883197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.883240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.883382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.883423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 
00:35:44.423 [2024-06-09 09:14:06.883948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.883977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.884477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.884506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.885008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.885035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.885542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.885571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.886130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.886158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 
00:35:44.423 [2024-06-09 09:14:06.886678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.886707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.887259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.887286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.887800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.887830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.888322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.888350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.888844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.888873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 
00:35:44.423 [2024-06-09 09:14:06.889371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.889400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.889932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.889960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.890675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.890766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.891238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.891273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.423 [2024-06-09 09:14:06.891758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.891789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 
00:35:44.423 [2024-06-09 09:14:06.892159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.423 [2024-06-09 09:14:06.892186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.423 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.892580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.892608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.893086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.893114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.893623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.893651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.894149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.894176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 
00:35:44.424 [2024-06-09 09:14:06.894789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.894880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.895430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.895467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.895716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.895744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.896270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.896297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.896589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.896620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 
00:35:44.424 [2024-06-09 09:14:06.897141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.897170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.897470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.897514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.898066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.898094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.898287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.898314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.898803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.898833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 
00:35:44.424 [2024-06-09 09:14:06.899342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.899368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.899887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.899915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.900307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.900334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.900869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.900898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.901396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.901435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 
00:35:44.424 [2024-06-09 09:14:06.901945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.901972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.902365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.902412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.902727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.902755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.903271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.903306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.903883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.903912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 
00:35:44.424 [2024-06-09 09:14:06.904298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.904324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.904827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.904855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.905311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.905339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.905844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.905872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.906380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.906422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 
00:35:44.424 [2024-06-09 09:14:06.906902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.906932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.907427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.907456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.907964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.907991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.908282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.908310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.908989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.909080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 
00:35:44.424 [2024-06-09 09:14:06.909809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.909900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.910597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.910688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.911290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.911325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.911812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.424 [2024-06-09 09:14:06.911843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.424 qpair failed and we were unable to recover it. 00:35:44.424 [2024-06-09 09:14:06.912340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.912367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 
00:35:44.425 [2024-06-09 09:14:06.912870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.912899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.913081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.913109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.913628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.913658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.913919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.913945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.914455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.914484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 
00:35:44.425 [2024-06-09 09:14:06.914997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.915024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.915512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.915541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.916052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.916079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.916565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.916593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.917085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.917113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 
00:35:44.425 [2024-06-09 09:14:06.917510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.917539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.917920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.917948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.918438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.918466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.918727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.918754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.919271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.919298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 
00:35:44.425 [2024-06-09 09:14:06.919834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.919862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.920354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.920382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.920979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.921007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.921509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.921538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.922054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.922081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 
00:35:44.425 [2024-06-09 09:14:06.922529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.922558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.923117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.923145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.923557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.923586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.923978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.924017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.924520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.924549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 
00:35:44.425 [2024-06-09 09:14:06.925072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.925099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.925614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.925642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.926133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.926160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.926569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.926596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.927082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.927109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 
00:35:44.425 [2024-06-09 09:14:06.927598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.927626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.927792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.927818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.928315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.928342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.928851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.928879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.929383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.929421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 
00:35:44.425 [2024-06-09 09:14:06.929903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.929930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.930342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.425 [2024-06-09 09:14:06.930369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.425 qpair failed and we were unable to recover it. 00:35:44.425 [2024-06-09 09:14:06.930969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.930998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.931388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.931437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.931856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.931883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 
00:35:44.426 [2024-06-09 09:14:06.932381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.932421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.932914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.932941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.933296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.933323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.933779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.933870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.934200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.934235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 
00:35:44.426 [2024-06-09 09:14:06.934746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.934776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.935296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.935324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.935817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.935847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.936337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.936365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.936939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.936968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 
00:35:44.426 [2024-06-09 09:14:06.937599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.937688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.938052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.938086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.938619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.938649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.939164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.939192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.939780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.939869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 
00:35:44.426 [2024-06-09 09:14:06.940640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.940729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.941209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.941243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.941590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.941622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.941989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.942016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.942377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.942416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 
00:35:44.426 [2024-06-09 09:14:06.942776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.942806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.943326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.943354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.943662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.943691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.944148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.944200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.944677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.944708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 
00:35:44.426 [2024-06-09 09:14:06.945089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.945117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.945640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.945669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.946224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.946252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.946652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.946680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 00:35:44.426 [2024-06-09 09:14:06.947230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.426 [2024-06-09 09:14:06.947258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.426 qpair failed and we were unable to recover it. 
00:35:44.427 [2024-06-09 09:14:06.947626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.947661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.948067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.948095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.948386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.948424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.948914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.948941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.949443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.949472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 
00:35:44.427 [2024-06-09 09:14:06.949757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.949784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.950286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.950312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.950802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.950831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.951366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.951394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.951895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.951923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 
00:35:44.427 [2024-06-09 09:14:06.952413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.952442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.952953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.952980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.953626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.953712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.954066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.954101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.954586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.954616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 
00:35:44.427 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:35:44.427 [2024-06-09 09:14:06.955144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.427 [2024-06-09 09:14:06.955173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.427 qpair failed and we were unable to recover it.
00:35:44.427 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0
00:35:44.427 [2024-06-09 09:14:06.955578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.427 [2024-06-09 09:14:06.955607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.427 qpair failed and we were unable to recover it.
00:35:44.427 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:35:44.427 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable
00:35:44.427 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:44.427 [2024-06-09 09:14:06.956121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.427 [2024-06-09 09:14:06.956149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.427 qpair failed and we were unable to recover it.
00:35:44.427 [2024-06-09 09:14:06.956536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.956575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.956971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.956999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.957456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.957486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.957964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.957992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.958465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.958493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 
00:35:44.427 [2024-06-09 09:14:06.958781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.958807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.959315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.959342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.959889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.959919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.960486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.960515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.960780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.960806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 
00:35:44.427 [2024-06-09 09:14:06.961317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.427 [2024-06-09 09:14:06.961345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.427 qpair failed and we were unable to recover it. 00:35:44.427 [2024-06-09 09:14:06.961782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.428 [2024-06-09 09:14:06.961811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.428 qpair failed and we were unable to recover it. 00:35:44.428 [2024-06-09 09:14:06.962313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.428 [2024-06-09 09:14:06.962340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.428 qpair failed and we were unable to recover it. 00:35:44.428 [2024-06-09 09:14:06.962721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.428 [2024-06-09 09:14:06.962749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.428 qpair failed and we were unable to recover it. 00:35:44.694 [2024-06-09 09:14:06.963124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.694 [2024-06-09 09:14:06.963153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.694 qpair failed and we were unable to recover it. 
00:35:44.694 [2024-06-09 09:14:06.963664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.694 [2024-06-09 09:14:06.963692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.694 qpair failed and we were unable to recover it. 00:35:44.694 [2024-06-09 09:14:06.963874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.694 [2024-06-09 09:14:06.963901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.694 qpair failed and we were unable to recover it. 00:35:44.694 [2024-06-09 09:14:06.964310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.694 [2024-06-09 09:14:06.964338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.694 qpair failed and we were unable to recover it. 00:35:44.694 [2024-06-09 09:14:06.964861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.694 [2024-06-09 09:14:06.964890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.694 qpair failed and we were unable to recover it. 00:35:44.694 [2024-06-09 09:14:06.965398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.694 [2024-06-09 09:14:06.965441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.694 qpair failed and we were unable to recover it. 
00:35:44.694 [2024-06-09 09:14:06.965825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.694 [2024-06-09 09:14:06.965852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.694 qpair failed and we were unable to recover it. 00:35:44.694 [2024-06-09 09:14:06.966347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.694 [2024-06-09 09:14:06.966375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.694 qpair failed and we were unable to recover it. 00:35:44.694 [2024-06-09 09:14:06.966876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.694 [2024-06-09 09:14:06.966909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.694 qpair failed and we were unable to recover it. 00:35:44.694 [2024-06-09 09:14:06.967299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.694 [2024-06-09 09:14:06.967338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.694 qpair failed and we were unable to recover it. 00:35:44.694 [2024-06-09 09:14:06.967757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.694 [2024-06-09 09:14:06.967788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.694 qpair failed and we were unable to recover it. 
00:35:44.694 [2024-06-09 09:14:06.968261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.694 [2024-06-09 09:14:06.968288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.694 qpair failed and we were unable to recover it. 00:35:44.694 [2024-06-09 09:14:06.968703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.694 [2024-06-09 09:14:06.968743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.694 qpair failed and we were unable to recover it. 00:35:44.694 [2024-06-09 09:14:06.969236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.694 [2024-06-09 09:14:06.969265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.694 qpair failed and we were unable to recover it. 00:35:44.694 [2024-06-09 09:14:06.969784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.694 [2024-06-09 09:14:06.969813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.970195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.970222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 
00:35:44.695 [2024-06-09 09:14:06.970731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.970759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.971241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.971269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.971728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.971755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.972050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.972081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.972596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.972628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 
00:35:44.695 [2024-06-09 09:14:06.973123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.973151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.973644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.973673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.974054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.974082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.974561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.974589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.975100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.975128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 
00:35:44.695 [2024-06-09 09:14:06.975599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.975635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.976145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.976175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.976595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.976623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.977109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.977136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.977720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.977808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 
00:35:44.695 [2024-06-09 09:14:06.978164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.978198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.978775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.978808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.979315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.979342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.979738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.979780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.980277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.980306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 
00:35:44.695 [2024-06-09 09:14:06.980791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.980821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.981334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.981361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.981902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.981932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.982422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.982451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.983041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.983069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 
00:35:44.695 [2024-06-09 09:14:06.983327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.983353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.983912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.984000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.984682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.984769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.985109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.985143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.985671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.985702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 
00:35:44.695 [2024-06-09 09:14:06.986192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.986220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.986814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.986901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.987599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.987686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.988146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.988181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.988635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.988665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 
00:35:44.695 [2024-06-09 09:14:06.989113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.695 [2024-06-09 09:14:06.989141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.695 qpair failed and we were unable to recover it. 00:35:44.695 [2024-06-09 09:14:06.989539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.696 [2024-06-09 09:14:06.989574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.696 qpair failed and we were unable to recover it. 00:35:44.696 [2024-06-09 09:14:06.990073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.696 [2024-06-09 09:14:06.990105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.696 qpair failed and we were unable to recover it. 00:35:44.696 [2024-06-09 09:14:06.990629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.696 [2024-06-09 09:14:06.990658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.696 qpair failed and we were unable to recover it. 00:35:44.696 [2024-06-09 09:14:06.991144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.696 [2024-06-09 09:14:06.991173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.696 qpair failed and we were unable to recover it. 
00:35:44.696 [2024-06-09 09:14:06.991765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.696 [2024-06-09 09:14:06.991852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.696 qpair failed and we were unable to recover it. 00:35:44.696 [2024-06-09 09:14:06.992216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.696 [2024-06-09 09:14:06.992251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.696 qpair failed and we were unable to recover it. 00:35:44.696 [2024-06-09 09:14:06.992741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.696 [2024-06-09 09:14:06.992774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.696 qpair failed and we were unable to recover it. 00:35:44.696 [2024-06-09 09:14:06.993035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.696 [2024-06-09 09:14:06.993062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.696 qpair failed and we were unable to recover it. 00:35:44.696 [2024-06-09 09:14:06.993473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.696 [2024-06-09 09:14:06.993503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.696 qpair failed and we were unable to recover it. 
00:35:44.696 [2024-06-09 09:14:06.993887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.696 [2024-06-09 09:14:06.993918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.696 qpair failed and we were unable to recover it.
00:35:44.696 [2024-06-09 09:14:06.994060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.696 [2024-06-09 09:14:06.994087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.696 qpair failed and we were unable to recover it.
00:35:44.696 [2024-06-09 09:14:06.994499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.696 [2024-06-09 09:14:06.994528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.696 qpair failed and we were unable to recover it.
00:35:44.696 [2024-06-09 09:14:06.995052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.696 [2024-06-09 09:14:06.995080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.696 qpair failed and we were unable to recover it.
00:35:44.696 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:44.696 [2024-06-09 09:14:06.995586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.696 [2024-06-09 09:14:06.995615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.696 qpair failed and we were unable to recover it.
00:35:44.696 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:44.696 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:44.696 [2024-06-09 09:14:06.996111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.696 [2024-06-09 09:14:06.996140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.696 qpair failed and we were unable to recover it.
00:35:44.696 09:14:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:44.696 [2024-06-09 09:14:06.996636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.696 [2024-06-09 09:14:06.996665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.696 qpair failed and we were unable to recover it.
00:35:44.696 [2024-06-09 09:14:06.997060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.696 [2024-06-09 09:14:06.997087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.696 qpair failed and we were unable to recover it.
00:35:44.696 [2024-06-09 09:14:06.997574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.696 [2024-06-09 09:14:06.997603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.696 qpair failed and we were unable to recover it.
00:35:44.697 [2024-06-09 09:14:07.012692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.697 [2024-06-09 09:14:07.012721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.697 qpair failed and we were unable to recover it.
00:35:44.697 Malloc0
00:35:44.697 [2024-06-09 09:14:07.013129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.697 [2024-06-09 09:14:07.013156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.697 qpair failed and we were unable to recover it.
00:35:44.697 [2024-06-09 09:14:07.013362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.697 [2024-06-09 09:14:07.013388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.697 qpair failed and we were unable to recover it.
00:35:44.697 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:44.697 [2024-06-09 09:14:07.013972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.697 [2024-06-09 09:14:07.014001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.697 qpair failed and we were unable to recover it.
00:35:44.697 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:35:44.697 [2024-06-09 09:14:07.014278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.697 [2024-06-09 09:14:07.014318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.697 qpair failed and we were unable to recover it.
00:35:44.697 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:44.697 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:44.697 [2024-06-09 09:14:07.014807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.697 [2024-06-09 09:14:07.014836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.697 qpair failed and we were unable to recover it.
00:35:44.697 [2024-06-09 09:14:07.015227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.697 [2024-06-09 09:14:07.015263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.697 qpair failed and we were unable to recover it.
00:35:44.697 [2024-06-09 09:14:07.015759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.697 [2024-06-09 09:14:07.015788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.697 qpair failed and we were unable to recover it.
00:35:44.697 [2024-06-09 09:14:07.020334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:44.697 [2024-06-09 09:14:07.020743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.697 [2024-06-09 09:14:07.020771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.697 qpair failed and we were unable to recover it.
00:35:44.697 [2024-06-09 09:14:07.021255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.697 [2024-06-09 09:14:07.021282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.697 qpair failed and we were unable to recover it.
00:35:44.697 [2024-06-09 09:14:07.021784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.697 [2024-06-09 09:14:07.021812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.697 qpair failed and we were unable to recover it.
00:35:44.697 [2024-06-09 09:14:07.022290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.697 [2024-06-09 09:14:07.022319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.697 qpair failed and we were unable to recover it.
00:35:44.697 [2024-06-09 09:14:07.022802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.697 [2024-06-09 09:14:07.022831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.697 qpair failed and we were unable to recover it.
00:35:44.698 [2024-06-09 09:14:07.028430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.698 [2024-06-09 09:14:07.028468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.698 qpair failed and we were unable to recover it.
00:35:44.698 [2024-06-09 09:14:07.028971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.698 [2024-06-09 09:14:07.029000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.698 qpair failed and we were unable to recover it.
00:35:44.698 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:44.698 [2024-06-09 09:14:07.029683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.698 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:35:44.698 [2024-06-09 09:14:07.029771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.698 qpair failed and we were unable to recover it.
00:35:44.698 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:44.698 [2024-06-09 09:14:07.030150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.698 [2024-06-09 09:14:07.030186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.698 qpair failed and we were unable to recover it.
00:35:44.698 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:44.698 [2024-06-09 09:14:07.030818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.698 [2024-06-09 09:14:07.030904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.698 qpair failed and we were unable to recover it.
00:35:44.698 [2024-06-09 09:14:07.031628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.698 [2024-06-09 09:14:07.031717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.698 qpair failed and we were unable to recover it.
00:35:44.698 [2024-06-09 09:14:07.032205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.698 [2024-06-09 09:14:07.032240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.698 qpair failed and we were unable to recover it.
00:35:44.698 [2024-06-09 09:14:07.032753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.698 [2024-06-09 09:14:07.032786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.698 qpair failed and we were unable to recover it.
00:35:44.698 [2024-06-09 09:14:07.033173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.698 [2024-06-09 09:14:07.033202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.698 qpair failed and we were unable to recover it.
00:35:44.698 [2024-06-09 09:14:07.041353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.698 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:44.698 [2024-06-09 09:14:07.041381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.698 qpair failed and we were unable to recover it.
00:35:44.698 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:35:44.698 [2024-06-09 09:14:07.041790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.698 [2024-06-09 09:14:07.041818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.698 qpair failed and we were unable to recover it.
00:35:44.698 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:44.698 [2024-06-09 09:14:07.042100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.698 [2024-06-09 09:14:07.042128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.699 qpair failed and we were unable to recover it.
00:35:44.699 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:44.699 [2024-06-09 09:14:07.042517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.699 [2024-06-09 09:14:07.042550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420
00:35:44.699 qpair failed and we were unable to recover it.
00:35:44.699 [2024-06-09 09:14:07.042931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.042959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.043373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.043400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.043940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.043968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.044353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.044380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.044764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.044793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 
00:35:44.699 [2024-06-09 09:14:07.045301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.045330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.045848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.045877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.046367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.046394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.046771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.046799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.047353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.047381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 
00:35:44.699 [2024-06-09 09:14:07.047896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.047925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.048437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.048469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.048974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.049002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.049495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.049525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.050001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.050028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 
00:35:44.699 [2024-06-09 09:14:07.050654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.050742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.051339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.051374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.051908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.051939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.052461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.052505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.052918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.052945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 
00:35:44.699 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:44.699 [2024-06-09 09:14:07.053437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.053474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:44.699 [2024-06-09 09:14:07.053945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.053973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:44.699 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:44.699 [2024-06-09 09:14:07.054459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.054488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.054888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.054915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 
00:35:44.699 [2024-06-09 09:14:07.055427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.055456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.055970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.055998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.699 [2024-06-09 09:14:07.056388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.699 [2024-06-09 09:14:07.056438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.699 qpair failed and we were unable to recover it. 00:35:44.700 [2024-06-09 09:14:07.056944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.700 [2024-06-09 09:14:07.056972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.700 qpair failed and we were unable to recover it. 00:35:44.700 [2024-06-09 09:14:07.057363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.700 [2024-06-09 09:14:07.057415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.700 qpair failed and we were unable to recover it. 
00:35:44.700 [2024-06-09 09:14:07.057735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.700 [2024-06-09 09:14:07.057764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.700 qpair failed and we were unable to recover it. 00:35:44.700 [2024-06-09 09:14:07.058277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.700 [2024-06-09 09:14:07.058305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.700 qpair failed and we were unable to recover it. 00:35:44.700 [2024-06-09 09:14:07.058889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.700 [2024-06-09 09:14:07.058918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.700 qpair failed and we were unable to recover it. 00:35:44.700 [2024-06-09 09:14:07.059418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.700 [2024-06-09 09:14:07.059447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.700 qpair failed and we were unable to recover it. 00:35:44.700 [2024-06-09 09:14:07.059785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.700 [2024-06-09 09:14:07.059813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.700 qpair failed and we were unable to recover it. 
00:35:44.700 [2024-06-09 09:14:07.060379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.700 [2024-06-09 09:14:07.060415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.700 qpair failed and we were unable to recover it. 00:35:44.700 [2024-06-09 09:14:07.060624] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:44.700 [2024-06-09 09:14:07.060673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.700 [2024-06-09 09:14:07.060705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6df0000b90 with addr=10.0.0.2, port=4420 00:35:44.700 qpair failed and we were unable to recover it. 00:35:44.700 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:44.700 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:44.700 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:44.700 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:44.700 [2024-06-09 09:14:07.071334] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:44.700 [2024-06-09 09:14:07.071617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:44.700 [2024-06-09 09:14:07.071669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:44.700 [2024-06-09 09:14:07.071693] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: 
*ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:44.700 [2024-06-09 09:14:07.071713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:44.700 [2024-06-09 09:14:07.071766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:44.700 qpair failed and we were unable to recover it. 00:35:44.700 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:44.700 09:14:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2847023 00:35:44.700 [2024-06-09 09:14:07.081251] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:44.700 [2024-06-09 09:14:07.081467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:44.700 [2024-06-09 09:14:07.081508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:44.700 [2024-06-09 09:14:07.081524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:44.700 [2024-06-09 09:14:07.081536] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:44.700 [2024-06-09 09:14:07.081568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:44.700 qpair failed and we were unable to recover it. 
00:35:44.700 [2024-06-09 09:14:07.091204] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:44.700 [2024-06-09 09:14:07.091345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:44.700 [2024-06-09 09:14:07.091369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:44.700 [2024-06-09 09:14:07.091379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:44.700 [2024-06-09 09:14:07.091388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:44.700 [2024-06-09 09:14:07.091417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:44.700 qpair failed and we were unable to recover it. 
00:35:44.700 [2024-06-09 09:14:07.101201] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:44.700 [2024-06-09 09:14:07.101316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:44.700 [2024-06-09 09:14:07.101335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:44.700 [2024-06-09 09:14:07.101342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:44.700 [2024-06-09 09:14:07.101348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:44.700 [2024-06-09 09:14:07.101365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:44.700 qpair failed and we were unable to recover it. 
00:35:44.700 [2024-06-09 09:14:07.111370] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:44.700 [2024-06-09 09:14:07.111488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:44.700 [2024-06-09 09:14:07.111507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:44.700 [2024-06-09 09:14:07.111514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:44.700 [2024-06-09 09:14:07.111521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:44.700 [2024-06-09 09:14:07.111538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:44.700 qpair failed and we were unable to recover it. 
00:35:44.700 [2024-06-09 09:14:07.121238] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:44.700 [2024-06-09 09:14:07.121336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:44.700 [2024-06-09 09:14:07.121354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:44.700 [2024-06-09 09:14:07.121361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:44.700 [2024-06-09 09:14:07.121367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:44.700 [2024-06-09 09:14:07.121386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:44.700 qpair failed and we were unable to recover it. 
00:35:44.700 [2024-06-09 09:14:07.131269] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:44.700 [2024-06-09 09:14:07.131380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:44.700 [2024-06-09 09:14:07.131398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:44.700 [2024-06-09 09:14:07.131413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:44.700 [2024-06-09 09:14:07.131420] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:44.700 [2024-06-09 09:14:07.131436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:44.700 qpair failed and we were unable to recover it. 
00:35:44.700 [2024-06-09 09:14:07.141244] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:44.700 [2024-06-09 09:14:07.141357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:44.700 [2024-06-09 09:14:07.141375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:44.700 [2024-06-09 09:14:07.141382] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:44.700 [2024-06-09 09:14:07.141388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:44.700 [2024-06-09 09:14:07.141409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:44.700 qpair failed and we were unable to recover it. 
00:35:44.700 [2024-06-09 09:14:07.151332] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:44.700 [2024-06-09 09:14:07.151447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:44.700 [2024-06-09 09:14:07.151465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:44.700 [2024-06-09 09:14:07.151472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:44.700 [2024-06-09 09:14:07.151479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:44.700 [2024-06-09 09:14:07.151495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:44.700 qpair failed and we were unable to recover it. 
00:35:44.700 [2024-06-09 09:14:07.161230] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:44.700 [2024-06-09 09:14:07.161331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:44.701 [2024-06-09 09:14:07.161349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:44.701 [2024-06-09 09:14:07.161356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:44.701 [2024-06-09 09:14:07.161363] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:44.701 [2024-06-09 09:14:07.161378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:44.701 qpair failed and we were unable to recover it. 
00:35:44.701 [2024-06-09 09:14:07.171451] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:44.701 [2024-06-09 09:14:07.171556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:44.701 [2024-06-09 09:14:07.171577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:44.701 [2024-06-09 09:14:07.171585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:44.701 [2024-06-09 09:14:07.171590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:44.701 [2024-06-09 09:14:07.171607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:44.701 qpair failed and we were unable to recover it. 
00:35:44.701 [2024-06-09 09:14:07.181353] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:44.701 [2024-06-09 09:14:07.181472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:44.701 [2024-06-09 09:14:07.181490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:44.701 [2024-06-09 09:14:07.181497] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:44.701 [2024-06-09 09:14:07.181503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:44.701 [2024-06-09 09:14:07.181520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:44.701 qpair failed and we were unable to recover it. 
00:35:44.701 [2024-06-09 09:14:07.191409] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:44.701 [2024-06-09 09:14:07.191523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:44.701 [2024-06-09 09:14:07.191541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:44.701 [2024-06-09 09:14:07.191549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:44.701 [2024-06-09 09:14:07.191555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:44.701 [2024-06-09 09:14:07.191571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:44.701 qpair failed and we were unable to recover it. 
00:35:44.701 [2024-06-09 09:14:07.201437] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:44.701 [2024-06-09 09:14:07.201574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:44.701 [2024-06-09 09:14:07.201592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:44.701 [2024-06-09 09:14:07.201599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:44.701 [2024-06-09 09:14:07.201605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:44.701 [2024-06-09 09:14:07.201621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:44.701 qpair failed and we were unable to recover it. 
00:35:44.701 [2024-06-09 09:14:07.211480] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:44.701 [2024-06-09 09:14:07.211582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:44.701 [2024-06-09 09:14:07.211600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:44.701 [2024-06-09 09:14:07.211608] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:44.701 [2024-06-09 09:14:07.211614] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:44.701 [2024-06-09 09:14:07.211634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:44.701 qpair failed and we were unable to recover it. 
00:35:44.701 [2024-06-09 09:14:07.221454] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:44.701 [2024-06-09 09:14:07.221562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:44.701 [2024-06-09 09:14:07.221580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:44.701 [2024-06-09 09:14:07.221588] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:44.701 [2024-06-09 09:14:07.221594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:44.701 [2024-06-09 09:14:07.221610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:44.701 qpair failed and we were unable to recover it. 
00:35:44.701 [2024-06-09 09:14:07.231501] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.701 [2024-06-09 09:14:07.231613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.701 [2024-06-09 09:14:07.231632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.701 [2024-06-09 09:14:07.231639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.701 [2024-06-09 09:14:07.231645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.701 [2024-06-09 09:14:07.231661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.701 qpair failed and we were unable to recover it.
00:35:44.701 [2024-06-09 09:14:07.241543] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.701 [2024-06-09 09:14:07.241646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.701 [2024-06-09 09:14:07.241664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.701 [2024-06-09 09:14:07.241672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.701 [2024-06-09 09:14:07.241678] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.701 [2024-06-09 09:14:07.241694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.701 qpair failed and we were unable to recover it.
00:35:44.964 [2024-06-09 09:14:07.251557] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.964 [2024-06-09 09:14:07.251662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.964 [2024-06-09 09:14:07.251680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.964 [2024-06-09 09:14:07.251688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.964 [2024-06-09 09:14:07.251694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.964 [2024-06-09 09:14:07.251709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.964 qpair failed and we were unable to recover it.
00:35:44.964 [2024-06-09 09:14:07.261592] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.964 [2024-06-09 09:14:07.261701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.964 [2024-06-09 09:14:07.261719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.964 [2024-06-09 09:14:07.261726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.964 [2024-06-09 09:14:07.261732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.964 [2024-06-09 09:14:07.261747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.964 qpair failed and we were unable to recover it.
00:35:44.964 [2024-06-09 09:14:07.271644] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.964 [2024-06-09 09:14:07.271756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.964 [2024-06-09 09:14:07.271773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.964 [2024-06-09 09:14:07.271781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.964 [2024-06-09 09:14:07.271787] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.964 [2024-06-09 09:14:07.271802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.964 qpair failed and we were unable to recover it.
00:35:44.964 [2024-06-09 09:14:07.281687] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.964 [2024-06-09 09:14:07.281797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.964 [2024-06-09 09:14:07.281815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.964 [2024-06-09 09:14:07.281823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.964 [2024-06-09 09:14:07.281829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.965 [2024-06-09 09:14:07.281845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.965 qpair failed and we were unable to recover it.
00:35:44.965 [2024-06-09 09:14:07.291597] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.965 [2024-06-09 09:14:07.291700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.965 [2024-06-09 09:14:07.291718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.965 [2024-06-09 09:14:07.291726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.965 [2024-06-09 09:14:07.291732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.965 [2024-06-09 09:14:07.291749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.965 qpair failed and we were unable to recover it.
00:35:44.965 [2024-06-09 09:14:07.301683] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.965 [2024-06-09 09:14:07.301794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.965 [2024-06-09 09:14:07.301812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.965 [2024-06-09 09:14:07.301820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.965 [2024-06-09 09:14:07.301829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.965 [2024-06-09 09:14:07.301845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.965 qpair failed and we were unable to recover it.
00:35:44.965 [2024-06-09 09:14:07.311751] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.965 [2024-06-09 09:14:07.311863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.965 [2024-06-09 09:14:07.311880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.965 [2024-06-09 09:14:07.311887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.965 [2024-06-09 09:14:07.311894] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.965 [2024-06-09 09:14:07.311909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.965 qpair failed and we were unable to recover it.
00:35:44.965 [2024-06-09 09:14:07.321659] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.965 [2024-06-09 09:14:07.321760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.965 [2024-06-09 09:14:07.321778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.965 [2024-06-09 09:14:07.321785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.965 [2024-06-09 09:14:07.321791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.965 [2024-06-09 09:14:07.321807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.965 qpair failed and we were unable to recover it.
00:35:44.965 [2024-06-09 09:14:07.331763] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.965 [2024-06-09 09:14:07.331874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.965 [2024-06-09 09:14:07.331892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.965 [2024-06-09 09:14:07.331899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.965 [2024-06-09 09:14:07.331905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.965 [2024-06-09 09:14:07.331920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.965 qpair failed and we were unable to recover it.
00:35:44.965 [2024-06-09 09:14:07.341907] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.965 [2024-06-09 09:14:07.342025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.965 [2024-06-09 09:14:07.342042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.965 [2024-06-09 09:14:07.342050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.965 [2024-06-09 09:14:07.342056] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.965 [2024-06-09 09:14:07.342071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.965 qpair failed and we were unable to recover it.
00:35:44.965 [2024-06-09 09:14:07.351915] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.965 [2024-06-09 09:14:07.352027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.965 [2024-06-09 09:14:07.352044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.965 [2024-06-09 09:14:07.352052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.965 [2024-06-09 09:14:07.352058] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.965 [2024-06-09 09:14:07.352074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.965 qpair failed and we were unable to recover it.
00:35:44.965 [2024-06-09 09:14:07.361914] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.965 [2024-06-09 09:14:07.362026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.965 [2024-06-09 09:14:07.362052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.965 [2024-06-09 09:14:07.362061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.965 [2024-06-09 09:14:07.362068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.965 [2024-06-09 09:14:07.362088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.965 qpair failed and we were unable to recover it.
00:35:44.965 [2024-06-09 09:14:07.371975] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.965 [2024-06-09 09:14:07.372084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.965 [2024-06-09 09:14:07.372109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.965 [2024-06-09 09:14:07.372119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.965 [2024-06-09 09:14:07.372126] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.965 [2024-06-09 09:14:07.372146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.965 qpair failed and we were unable to recover it.
00:35:44.965 [2024-06-09 09:14:07.381916] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.965 [2024-06-09 09:14:07.382027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.965 [2024-06-09 09:14:07.382053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.965 [2024-06-09 09:14:07.382062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.965 [2024-06-09 09:14:07.382069] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.965 [2024-06-09 09:14:07.382090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.965 qpair failed and we were unable to recover it.
00:35:44.965 [2024-06-09 09:14:07.391959] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.965 [2024-06-09 09:14:07.392106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.965 [2024-06-09 09:14:07.392132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.965 [2024-06-09 09:14:07.392149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.965 [2024-06-09 09:14:07.392156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.965 [2024-06-09 09:14:07.392176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.965 qpair failed and we were unable to recover it.
00:35:44.965 [2024-06-09 09:14:07.401998] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.965 [2024-06-09 09:14:07.402106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.965 [2024-06-09 09:14:07.402132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.965 [2024-06-09 09:14:07.402141] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.965 [2024-06-09 09:14:07.402148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.965 [2024-06-09 09:14:07.402169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.965 qpair failed and we were unable to recover it.
00:35:44.965 [2024-06-09 09:14:07.412016] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.965 [2024-06-09 09:14:07.412128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.965 [2024-06-09 09:14:07.412154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.965 [2024-06-09 09:14:07.412163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.965 [2024-06-09 09:14:07.412169] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.965 [2024-06-09 09:14:07.412190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.965 qpair failed and we were unable to recover it.
00:35:44.965 [2024-06-09 09:14:07.422048] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.965 [2024-06-09 09:14:07.422154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.966 [2024-06-09 09:14:07.422174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.966 [2024-06-09 09:14:07.422181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.966 [2024-06-09 09:14:07.422188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.966 [2024-06-09 09:14:07.422205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.966 qpair failed and we were unable to recover it.
00:35:44.966 [2024-06-09 09:14:07.432064] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.966 [2024-06-09 09:14:07.432172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.966 [2024-06-09 09:14:07.432191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.966 [2024-06-09 09:14:07.432199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.966 [2024-06-09 09:14:07.432205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.966 [2024-06-09 09:14:07.432221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.966 qpair failed and we were unable to recover it.
00:35:44.966 [2024-06-09 09:14:07.442136] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.966 [2024-06-09 09:14:07.442257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.966 [2024-06-09 09:14:07.442275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.966 [2024-06-09 09:14:07.442283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.966 [2024-06-09 09:14:07.442289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.966 [2024-06-09 09:14:07.442305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.966 qpair failed and we were unable to recover it.
00:35:44.966 [2024-06-09 09:14:07.452107] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.966 [2024-06-09 09:14:07.452211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.966 [2024-06-09 09:14:07.452229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.966 [2024-06-09 09:14:07.452237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.966 [2024-06-09 09:14:07.452242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.966 [2024-06-09 09:14:07.452259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.966 qpair failed and we were unable to recover it.
00:35:44.966 [2024-06-09 09:14:07.462158] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.966 [2024-06-09 09:14:07.462264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.966 [2024-06-09 09:14:07.462282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.966 [2024-06-09 09:14:07.462290] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.966 [2024-06-09 09:14:07.462296] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.966 [2024-06-09 09:14:07.462311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.966 qpair failed and we were unable to recover it.
00:35:44.966 [2024-06-09 09:14:07.472151] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.966 [2024-06-09 09:14:07.472270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.966 [2024-06-09 09:14:07.472288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.966 [2024-06-09 09:14:07.472296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.966 [2024-06-09 09:14:07.472302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.966 [2024-06-09 09:14:07.472318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.966 qpair failed and we were unable to recover it.
00:35:44.966 [2024-06-09 09:14:07.482195] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.966 [2024-06-09 09:14:07.482304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.966 [2024-06-09 09:14:07.482325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.966 [2024-06-09 09:14:07.482333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.966 [2024-06-09 09:14:07.482339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.966 [2024-06-09 09:14:07.482355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.966 qpair failed and we were unable to recover it.
00:35:44.966 [2024-06-09 09:14:07.492221] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.966 [2024-06-09 09:14:07.492325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.966 [2024-06-09 09:14:07.492343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.966 [2024-06-09 09:14:07.492350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.966 [2024-06-09 09:14:07.492356] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.966 [2024-06-09 09:14:07.492372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.966 qpair failed and we were unable to recover it.
00:35:44.966 [2024-06-09 09:14:07.502230] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.966 [2024-06-09 09:14:07.502340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.966 [2024-06-09 09:14:07.502358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.966 [2024-06-09 09:14:07.502365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.966 [2024-06-09 09:14:07.502372] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.966 [2024-06-09 09:14:07.502388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.966 qpair failed and we were unable to recover it.
00:35:44.966 [2024-06-09 09:14:07.512312] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:44.966 [2024-06-09 09:14:07.512428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:44.966 [2024-06-09 09:14:07.512446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:44.966 [2024-06-09 09:14:07.512454] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:44.966 [2024-06-09 09:14:07.512460] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:44.966 [2024-06-09 09:14:07.512477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:44.966 qpair failed and we were unable to recover it.
00:35:45.229 [2024-06-09 09:14:07.522312] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.229 [2024-06-09 09:14:07.522426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.229 [2024-06-09 09:14:07.522444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.229 [2024-06-09 09:14:07.522451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.229 [2024-06-09 09:14:07.522457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:45.229 [2024-06-09 09:14:07.522477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:45.229 qpair failed and we were unable to recover it.
00:35:45.229 [2024-06-09 09:14:07.532406] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.229 [2024-06-09 09:14:07.532514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.229 [2024-06-09 09:14:07.532533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.229 [2024-06-09 09:14:07.532541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.229 [2024-06-09 09:14:07.532547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:45.229 [2024-06-09 09:14:07.532563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:45.229 qpair failed and we were unable to recover it.
00:35:45.229 [2024-06-09 09:14:07.542413] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.229 [2024-06-09 09:14:07.542528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.229 [2024-06-09 09:14:07.542546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.229 [2024-06-09 09:14:07.542553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.229 [2024-06-09 09:14:07.542559] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:45.229 [2024-06-09 09:14:07.542575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:45.229 qpair failed and we were unable to recover it.
00:35:45.229 [2024-06-09 09:14:07.552351] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.229 [2024-06-09 09:14:07.552467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.229 [2024-06-09 09:14:07.552485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.229 [2024-06-09 09:14:07.552492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.229 [2024-06-09 09:14:07.552498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:45.229 [2024-06-09 09:14:07.552514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:45.229 qpair failed and we were unable to recover it.
00:35:45.229 [2024-06-09 09:14:07.562431] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.229 [2024-06-09 09:14:07.562537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.229 [2024-06-09 09:14:07.562554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.229 [2024-06-09 09:14:07.562561] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.229 [2024-06-09 09:14:07.562568] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:45.229 [2024-06-09 09:14:07.562584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:45.229 qpair failed and we were unable to recover it.
00:35:45.229 [2024-06-09 09:14:07.572362] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.229 [2024-06-09 09:14:07.572467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.229 [2024-06-09 09:14:07.572489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.229 [2024-06-09 09:14:07.572497] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.229 [2024-06-09 09:14:07.572502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:45.229 [2024-06-09 09:14:07.572519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:45.229 qpair failed and we were unable to recover it.
00:35:45.229 [2024-06-09 09:14:07.582547] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.229 [2024-06-09 09:14:07.582680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.229 [2024-06-09 09:14:07.582698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.229 [2024-06-09 09:14:07.582705] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.229 [2024-06-09 09:14:07.582711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:45.229 [2024-06-09 09:14:07.582728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:45.229 qpair failed and we were unable to recover it.
00:35:45.229 [2024-06-09 09:14:07.592499] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.230 [2024-06-09 09:14:07.592611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.230 [2024-06-09 09:14:07.592628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.230 [2024-06-09 09:14:07.592636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.230 [2024-06-09 09:14:07.592642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.230 [2024-06-09 09:14:07.592658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.230 qpair failed and we were unable to recover it. 
00:35:45.230 [2024-06-09 09:14:07.602558] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.230 [2024-06-09 09:14:07.602666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.230 [2024-06-09 09:14:07.602683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.230 [2024-06-09 09:14:07.602691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.230 [2024-06-09 09:14:07.602697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.230 [2024-06-09 09:14:07.602712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.230 qpair failed and we were unable to recover it. 
00:35:45.230 [2024-06-09 09:14:07.612609] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.230 [2024-06-09 09:14:07.612724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.230 [2024-06-09 09:14:07.612742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.230 [2024-06-09 09:14:07.612750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.230 [2024-06-09 09:14:07.612755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.230 [2024-06-09 09:14:07.612775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.230 qpair failed and we were unable to recover it. 
00:35:45.230 [2024-06-09 09:14:07.622479] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.230 [2024-06-09 09:14:07.622586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.230 [2024-06-09 09:14:07.622604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.230 [2024-06-09 09:14:07.622611] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.230 [2024-06-09 09:14:07.622618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.230 [2024-06-09 09:14:07.622634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.230 qpair failed and we were unable to recover it. 
00:35:45.230 [2024-06-09 09:14:07.632544] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.230 [2024-06-09 09:14:07.632691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.230 [2024-06-09 09:14:07.632708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.230 [2024-06-09 09:14:07.632716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.230 [2024-06-09 09:14:07.632722] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.230 [2024-06-09 09:14:07.632737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.230 qpair failed and we were unable to recover it. 
00:35:45.230 [2024-06-09 09:14:07.642548] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.230 [2024-06-09 09:14:07.642647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.230 [2024-06-09 09:14:07.642665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.230 [2024-06-09 09:14:07.642673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.230 [2024-06-09 09:14:07.642679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.230 [2024-06-09 09:14:07.642695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.230 qpair failed and we were unable to recover it. 
00:35:45.230 [2024-06-09 09:14:07.652679] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.230 [2024-06-09 09:14:07.652777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.230 [2024-06-09 09:14:07.652795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.230 [2024-06-09 09:14:07.652803] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.230 [2024-06-09 09:14:07.652809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.230 [2024-06-09 09:14:07.652825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.230 qpair failed and we were unable to recover it. 
00:35:45.230 [2024-06-09 09:14:07.662719] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.230 [2024-06-09 09:14:07.662825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.230 [2024-06-09 09:14:07.662846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.230 [2024-06-09 09:14:07.662854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.230 [2024-06-09 09:14:07.662860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.230 [2024-06-09 09:14:07.662876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.230 qpair failed and we were unable to recover it. 
00:35:45.230 [2024-06-09 09:14:07.672722] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.230 [2024-06-09 09:14:07.672833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.230 [2024-06-09 09:14:07.672850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.230 [2024-06-09 09:14:07.672858] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.230 [2024-06-09 09:14:07.672864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.230 [2024-06-09 09:14:07.672880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.230 qpair failed and we were unable to recover it. 
00:35:45.230 [2024-06-09 09:14:07.682760] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.230 [2024-06-09 09:14:07.682866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.230 [2024-06-09 09:14:07.682884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.230 [2024-06-09 09:14:07.682891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.230 [2024-06-09 09:14:07.682897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.230 [2024-06-09 09:14:07.682913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.230 qpair failed and we were unable to recover it. 
00:35:45.230 [2024-06-09 09:14:07.692769] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.230 [2024-06-09 09:14:07.692874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.230 [2024-06-09 09:14:07.692892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.230 [2024-06-09 09:14:07.692900] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.230 [2024-06-09 09:14:07.692906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.230 [2024-06-09 09:14:07.692922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.230 qpair failed and we were unable to recover it. 
00:35:45.230 [2024-06-09 09:14:07.702803] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.230 [2024-06-09 09:14:07.702915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.230 [2024-06-09 09:14:07.702933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.230 [2024-06-09 09:14:07.702940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.230 [2024-06-09 09:14:07.702950] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.230 [2024-06-09 09:14:07.702965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.230 qpair failed and we were unable to recover it. 
00:35:45.230 [2024-06-09 09:14:07.712879] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.230 [2024-06-09 09:14:07.712991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.230 [2024-06-09 09:14:07.713017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.230 [2024-06-09 09:14:07.713026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.230 [2024-06-09 09:14:07.713033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.230 [2024-06-09 09:14:07.713054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.230 qpair failed and we were unable to recover it. 
00:35:45.230 [2024-06-09 09:14:07.722901] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.230 [2024-06-09 09:14:07.723017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.230 [2024-06-09 09:14:07.723043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.230 [2024-06-09 09:14:07.723053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.231 [2024-06-09 09:14:07.723059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.231 [2024-06-09 09:14:07.723080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.231 qpair failed and we were unable to recover it. 
00:35:45.231 [2024-06-09 09:14:07.732874] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.231 [2024-06-09 09:14:07.732985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.231 [2024-06-09 09:14:07.733012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.231 [2024-06-09 09:14:07.733021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.231 [2024-06-09 09:14:07.733027] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.231 [2024-06-09 09:14:07.733048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.231 qpair failed and we were unable to recover it. 
00:35:45.231 [2024-06-09 09:14:07.742933] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.231 [2024-06-09 09:14:07.743048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.231 [2024-06-09 09:14:07.743074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.231 [2024-06-09 09:14:07.743083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.231 [2024-06-09 09:14:07.743089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.231 [2024-06-09 09:14:07.743110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.231 qpair failed and we were unable to recover it. 
00:35:45.231 [2024-06-09 09:14:07.752935] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.231 [2024-06-09 09:14:07.753056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.231 [2024-06-09 09:14:07.753076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.231 [2024-06-09 09:14:07.753084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.231 [2024-06-09 09:14:07.753091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.231 [2024-06-09 09:14:07.753112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.231 qpair failed and we were unable to recover it. 
00:35:45.231 [2024-06-09 09:14:07.763042] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.231 [2024-06-09 09:14:07.763155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.231 [2024-06-09 09:14:07.763174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.231 [2024-06-09 09:14:07.763181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.231 [2024-06-09 09:14:07.763188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.231 [2024-06-09 09:14:07.763204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.231 qpair failed and we were unable to recover it. 
00:35:45.231 [2024-06-09 09:14:07.773003] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.231 [2024-06-09 09:14:07.773119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.231 [2024-06-09 09:14:07.773145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.231 [2024-06-09 09:14:07.773154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.231 [2024-06-09 09:14:07.773161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.231 [2024-06-09 09:14:07.773181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.231 qpair failed and we were unable to recover it. 
00:35:45.231 [2024-06-09 09:14:07.783088] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.231 [2024-06-09 09:14:07.783219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.231 [2024-06-09 09:14:07.783245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.231 [2024-06-09 09:14:07.783254] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.231 [2024-06-09 09:14:07.783261] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.231 [2024-06-09 09:14:07.783283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.231 qpair failed and we were unable to recover it. 
00:35:45.494 [2024-06-09 09:14:07.793058] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.494 [2024-06-09 09:14:07.793176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.494 [2024-06-09 09:14:07.793196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.494 [2024-06-09 09:14:07.793209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.494 [2024-06-09 09:14:07.793215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.494 [2024-06-09 09:14:07.793233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.494 qpair failed and we were unable to recover it. 
00:35:45.494 [2024-06-09 09:14:07.803077] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.494 [2024-06-09 09:14:07.803182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.494 [2024-06-09 09:14:07.803200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.494 [2024-06-09 09:14:07.803208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.494 [2024-06-09 09:14:07.803214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.494 [2024-06-09 09:14:07.803230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.494 qpair failed and we were unable to recover it. 
00:35:45.494 [2024-06-09 09:14:07.813101] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.494 [2024-06-09 09:14:07.813207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.494 [2024-06-09 09:14:07.813225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.494 [2024-06-09 09:14:07.813232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.494 [2024-06-09 09:14:07.813238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.494 [2024-06-09 09:14:07.813254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.494 qpair failed and we were unable to recover it. 
00:35:45.494 [2024-06-09 09:14:07.823125] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.494 [2024-06-09 09:14:07.823238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.494 [2024-06-09 09:14:07.823255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.494 [2024-06-09 09:14:07.823263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.494 [2024-06-09 09:14:07.823269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.494 [2024-06-09 09:14:07.823284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.494 qpair failed and we were unable to recover it. 
00:35:45.494 [2024-06-09 09:14:07.833161] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.494 [2024-06-09 09:14:07.833278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.494 [2024-06-09 09:14:07.833296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.494 [2024-06-09 09:14:07.833304] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.494 [2024-06-09 09:14:07.833310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.494 [2024-06-09 09:14:07.833326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.494 qpair failed and we were unable to recover it. 
00:35:45.494 [2024-06-09 09:14:07.843256] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.494 [2024-06-09 09:14:07.843386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.494 [2024-06-09 09:14:07.843409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.494 [2024-06-09 09:14:07.843417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.494 [2024-06-09 09:14:07.843423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.494 [2024-06-09 09:14:07.843439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.494 qpair failed and we were unable to recover it. 
00:35:45.494 [2024-06-09 09:14:07.853135] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.494 [2024-06-09 09:14:07.853238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.494 [2024-06-09 09:14:07.853256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.494 [2024-06-09 09:14:07.853264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.494 [2024-06-09 09:14:07.853270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.494 [2024-06-09 09:14:07.853286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.494 qpair failed and we were unable to recover it. 
00:35:45.494 [2024-06-09 09:14:07.863249] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.494 [2024-06-09 09:14:07.863358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.494 [2024-06-09 09:14:07.863375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.494 [2024-06-09 09:14:07.863383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.494 [2024-06-09 09:14:07.863389] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.494 [2024-06-09 09:14:07.863410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.494 qpair failed and we were unable to recover it. 
00:35:45.494 [2024-06-09 09:14:07.873200] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.494 [2024-06-09 09:14:07.873308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.494 [2024-06-09 09:14:07.873326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.494 [2024-06-09 09:14:07.873334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.494 [2024-06-09 09:14:07.873340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.494 [2024-06-09 09:14:07.873355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.494 qpair failed and we were unable to recover it. 
00:35:45.494 [2024-06-09 09:14:07.883309] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.494 [2024-06-09 09:14:07.883418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.494 [2024-06-09 09:14:07.883436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.494 [2024-06-09 09:14:07.883447] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.494 [2024-06-09 09:14:07.883453] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.494 [2024-06-09 09:14:07.883470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.494 qpair failed and we were unable to recover it. 
00:35:45.494 [2024-06-09 09:14:07.893226] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.494 [2024-06-09 09:14:07.893351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.494 [2024-06-09 09:14:07.893368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.494 [2024-06-09 09:14:07.893376] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.494 [2024-06-09 09:14:07.893382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.494 [2024-06-09 09:14:07.893398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.494 qpair failed and we were unable to recover it. 
00:35:45.494 [2024-06-09 09:14:07.903327] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.494 [2024-06-09 09:14:07.903438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.494 [2024-06-09 09:14:07.903456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.494 [2024-06-09 09:14:07.903464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.494 [2024-06-09 09:14:07.903470] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.494 [2024-06-09 09:14:07.903486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.494 qpair failed and we were unable to recover it. 
00:35:45.494 [2024-06-09 09:14:07.913292] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.494 [2024-06-09 09:14:07.913442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.494 [2024-06-09 09:14:07.913460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.494 [2024-06-09 09:14:07.913467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.494 [2024-06-09 09:14:07.913473] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.494 [2024-06-09 09:14:07.913491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.494 qpair failed and we were unable to recover it. 
00:35:45.494 [2024-06-09 09:14:07.923412] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.494 [2024-06-09 09:14:07.923516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.494 [2024-06-09 09:14:07.923534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.494 [2024-06-09 09:14:07.923541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.494 [2024-06-09 09:14:07.923547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.494 [2024-06-09 09:14:07.923563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.494 qpair failed and we were unable to recover it. 
00:35:45.494 [2024-06-09 09:14:07.933468] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.494 [2024-06-09 09:14:07.933580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.494 [2024-06-09 09:14:07.933599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.494 [2024-06-09 09:14:07.933607] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.494 [2024-06-09 09:14:07.933613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.494 [2024-06-09 09:14:07.933633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.494 qpair failed and we were unable to recover it. 
00:35:45.494 [2024-06-09 09:14:07.943512] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.494 [2024-06-09 09:14:07.943661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.494 [2024-06-09 09:14:07.943679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.494 [2024-06-09 09:14:07.943686] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.494 [2024-06-09 09:14:07.943693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.494 [2024-06-09 09:14:07.943709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.495 qpair failed and we were unable to recover it. 
00:35:45.495 [2024-06-09 09:14:07.953526] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.495 [2024-06-09 09:14:07.953637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.495 [2024-06-09 09:14:07.953655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.495 [2024-06-09 09:14:07.953662] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.495 [2024-06-09 09:14:07.953669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.495 [2024-06-09 09:14:07.953684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.495 qpair failed and we were unable to recover it. 
00:35:45.495 [2024-06-09 09:14:07.963561] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.495 [2024-06-09 09:14:07.963666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.495 [2024-06-09 09:14:07.963684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.495 [2024-06-09 09:14:07.963692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.495 [2024-06-09 09:14:07.963698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.495 [2024-06-09 09:14:07.963714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.495 qpair failed and we were unable to recover it. 
00:35:45.495 [2024-06-09 09:14:07.973535] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.495 [2024-06-09 09:14:07.973640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.495 [2024-06-09 09:14:07.973661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.495 [2024-06-09 09:14:07.973669] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.495 [2024-06-09 09:14:07.973674] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.495 [2024-06-09 09:14:07.973690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.495 qpair failed and we were unable to recover it. 
00:35:45.495 [2024-06-09 09:14:07.983614] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.495 [2024-06-09 09:14:07.983716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.495 [2024-06-09 09:14:07.983734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.495 [2024-06-09 09:14:07.983741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.495 [2024-06-09 09:14:07.983747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.495 [2024-06-09 09:14:07.983763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.495 qpair failed and we were unable to recover it. 
00:35:45.495 [2024-06-09 09:14:07.993628] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.495 [2024-06-09 09:14:07.993740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.495 [2024-06-09 09:14:07.993757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.495 [2024-06-09 09:14:07.993764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.495 [2024-06-09 09:14:07.993770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.495 [2024-06-09 09:14:07.993786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.495 qpair failed and we were unable to recover it. 
00:35:45.495 [2024-06-09 09:14:08.003665] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.495 [2024-06-09 09:14:08.003767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.495 [2024-06-09 09:14:08.003784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.495 [2024-06-09 09:14:08.003792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.495 [2024-06-09 09:14:08.003798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.495 [2024-06-09 09:14:08.003814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.495 qpair failed and we were unable to recover it. 
00:35:45.495 [2024-06-09 09:14:08.013696] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.495 [2024-06-09 09:14:08.013800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.495 [2024-06-09 09:14:08.013818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.495 [2024-06-09 09:14:08.013825] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.495 [2024-06-09 09:14:08.013832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.495 [2024-06-09 09:14:08.013852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.495 qpair failed and we were unable to recover it. 
00:35:45.495 [2024-06-09 09:14:08.023710] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.495 [2024-06-09 09:14:08.023821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.495 [2024-06-09 09:14:08.023838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.495 [2024-06-09 09:14:08.023846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.495 [2024-06-09 09:14:08.023852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.495 [2024-06-09 09:14:08.023868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.495 qpair failed and we were unable to recover it. 
00:35:45.495 [2024-06-09 09:14:08.033754] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.495 [2024-06-09 09:14:08.033863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.495 [2024-06-09 09:14:08.033881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.495 [2024-06-09 09:14:08.033888] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.495 [2024-06-09 09:14:08.033894] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.495 [2024-06-09 09:14:08.033909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.495 qpair failed and we were unable to recover it. 
00:35:45.495 [2024-06-09 09:14:08.043670] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.495 [2024-06-09 09:14:08.043774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.495 [2024-06-09 09:14:08.043791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.495 [2024-06-09 09:14:08.043799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.495 [2024-06-09 09:14:08.043805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.495 [2024-06-09 09:14:08.043821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.495 qpair failed and we were unable to recover it. 
00:35:45.759 [2024-06-09 09:14:08.053781] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.759 [2024-06-09 09:14:08.053892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.759 [2024-06-09 09:14:08.053910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.759 [2024-06-09 09:14:08.053917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.759 [2024-06-09 09:14:08.053924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.759 [2024-06-09 09:14:08.053940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.759 qpair failed and we were unable to recover it. 
00:35:45.759 [2024-06-09 09:14:08.063847] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.759 [2024-06-09 09:14:08.063962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.759 [2024-06-09 09:14:08.063994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.759 [2024-06-09 09:14:08.064004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.759 [2024-06-09 09:14:08.064011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.759 [2024-06-09 09:14:08.064032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.759 qpair failed and we were unable to recover it. 
00:35:45.759 [2024-06-09 09:14:08.073831] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.759 [2024-06-09 09:14:08.073966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.759 [2024-06-09 09:14:08.073985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.759 [2024-06-09 09:14:08.073993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.759 [2024-06-09 09:14:08.074000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.759 [2024-06-09 09:14:08.074017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.759 qpair failed and we were unable to recover it. 
00:35:45.759 [2024-06-09 09:14:08.083933] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.759 [2024-06-09 09:14:08.084066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.759 [2024-06-09 09:14:08.084091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.759 [2024-06-09 09:14:08.084101] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.759 [2024-06-09 09:14:08.084108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.759 [2024-06-09 09:14:08.084129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.759 qpair failed and we were unable to recover it. 
00:35:45.759 [2024-06-09 09:14:08.093929] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.759 [2024-06-09 09:14:08.094030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.759 [2024-06-09 09:14:08.094056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.759 [2024-06-09 09:14:08.094065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.759 [2024-06-09 09:14:08.094072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.759 [2024-06-09 09:14:08.094092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.759 qpair failed and we were unable to recover it. 
00:35:45.759 [2024-06-09 09:14:08.103953] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.759 [2024-06-09 09:14:08.104062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.759 [2024-06-09 09:14:08.104088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.759 [2024-06-09 09:14:08.104098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.759 [2024-06-09 09:14:08.104113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.759 [2024-06-09 09:14:08.104134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.759 qpair failed and we were unable to recover it. 
00:35:45.759 [2024-06-09 09:14:08.113960] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.759 [2024-06-09 09:14:08.114076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.759 [2024-06-09 09:14:08.114095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.759 [2024-06-09 09:14:08.114103] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.759 [2024-06-09 09:14:08.114109] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.759 [2024-06-09 09:14:08.114126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.759 qpair failed and we were unable to recover it. 
00:35:45.759 [2024-06-09 09:14:08.124005] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.759 [2024-06-09 09:14:08.124116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.759 [2024-06-09 09:14:08.124143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.759 [2024-06-09 09:14:08.124152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.759 [2024-06-09 09:14:08.124159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.759 [2024-06-09 09:14:08.124180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.759 qpair failed and we were unable to recover it. 
00:35:45.759 [2024-06-09 09:14:08.134007] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.759 [2024-06-09 09:14:08.134114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.759 [2024-06-09 09:14:08.134134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.759 [2024-06-09 09:14:08.134142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.759 [2024-06-09 09:14:08.134148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.759 [2024-06-09 09:14:08.134165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.759 qpair failed and we were unable to recover it. 
00:35:45.759 [2024-06-09 09:14:08.144065] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.759 [2024-06-09 09:14:08.144178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.759 [2024-06-09 09:14:08.144204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.759 [2024-06-09 09:14:08.144214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.759 [2024-06-09 09:14:08.144220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.759 [2024-06-09 09:14:08.144242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.759 qpair failed and we were unable to recover it. 
00:35:45.759 [2024-06-09 09:14:08.154101] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.759 [2024-06-09 09:14:08.154268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.759 [2024-06-09 09:14:08.154288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.759 [2024-06-09 09:14:08.154296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.759 [2024-06-09 09:14:08.154302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.759 [2024-06-09 09:14:08.154320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.759 qpair failed and we were unable to recover it. 
00:35:45.759 [2024-06-09 09:14:08.164021] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.759 [2024-06-09 09:14:08.164125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.759 [2024-06-09 09:14:08.164143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.759 [2024-06-09 09:14:08.164151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.760 [2024-06-09 09:14:08.164157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.760 [2024-06-09 09:14:08.164173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.760 qpair failed and we were unable to recover it. 
00:35:45.760 [2024-06-09 09:14:08.174022] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.760 [2024-06-09 09:14:08.174124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.760 [2024-06-09 09:14:08.174143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.760 [2024-06-09 09:14:08.174151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.760 [2024-06-09 09:14:08.174157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.760 [2024-06-09 09:14:08.174173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.760 qpair failed and we were unable to recover it. 
00:35:45.760 [2024-06-09 09:14:08.184179] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.760 [2024-06-09 09:14:08.184284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.760 [2024-06-09 09:14:08.184303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.760 [2024-06-09 09:14:08.184311] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.760 [2024-06-09 09:14:08.184317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:45.760 [2024-06-09 09:14:08.184333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:45.760 qpair failed and we were unable to recover it. 
00:35:45.760 [2024-06-09 09:14:08.194147] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.760 [2024-06-09 09:14:08.194260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.760 [2024-06-09 09:14:08.194278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.760 [2024-06-09 09:14:08.194291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.760 [2024-06-09 09:14:08.194297] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:45.760 [2024-06-09 09:14:08.194314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:45.760 qpair failed and we were unable to recover it.
[The same seven-entry CONNECT failure sequence repeats at roughly 10 ms intervals from 09:14:08.204 through 09:14:08.545 (log timestamps 00:35:45.760–00:35:46.026), each time for controller ID 0x1, tqpair=0x7f6df0000b90, qpair id 4, ending "qpair failed and we were unable to recover it."]
00:35:46.026 [2024-06-09 09:14:08.555279] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.026 [2024-06-09 09:14:08.555434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.026 [2024-06-09 09:14:08.555463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.026 [2024-06-09 09:14:08.555473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.026 [2024-06-09 09:14:08.555479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.026 [2024-06-09 09:14:08.555502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.026 qpair failed and we were unable to recover it. 
00:35:46.026 [2024-06-09 09:14:08.565327] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.026 [2024-06-09 09:14:08.565477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.026 [2024-06-09 09:14:08.565507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.026 [2024-06-09 09:14:08.565517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.026 [2024-06-09 09:14:08.565523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.027 [2024-06-09 09:14:08.565547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.027 qpair failed and we were unable to recover it. 
00:35:46.027 [2024-06-09 09:14:08.575369] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.027 [2024-06-09 09:14:08.575497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.027 [2024-06-09 09:14:08.575527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.027 [2024-06-09 09:14:08.575536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.027 [2024-06-09 09:14:08.575542] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.027 [2024-06-09 09:14:08.575565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.027 qpair failed and we were unable to recover it. 
00:35:46.290 [2024-06-09 09:14:08.585368] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.290 [2024-06-09 09:14:08.585510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.290 [2024-06-09 09:14:08.585541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.290 [2024-06-09 09:14:08.585550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.290 [2024-06-09 09:14:08.585556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.291 [2024-06-09 09:14:08.585580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.291 qpair failed and we were unable to recover it. 
00:35:46.291 [2024-06-09 09:14:08.595398] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.291 [2024-06-09 09:14:08.595547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.291 [2024-06-09 09:14:08.595577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.291 [2024-06-09 09:14:08.595587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.291 [2024-06-09 09:14:08.595600] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.291 [2024-06-09 09:14:08.595624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.291 qpair failed and we were unable to recover it. 
00:35:46.291 [2024-06-09 09:14:08.605598] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.291 [2024-06-09 09:14:08.605769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.291 [2024-06-09 09:14:08.605798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.291 [2024-06-09 09:14:08.605807] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.291 [2024-06-09 09:14:08.605814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.291 [2024-06-09 09:14:08.605837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.291 qpair failed and we were unable to recover it. 
00:35:46.291 [2024-06-09 09:14:08.615468] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.291 [2024-06-09 09:14:08.615601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.291 [2024-06-09 09:14:08.615631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.291 [2024-06-09 09:14:08.615641] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.291 [2024-06-09 09:14:08.615648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.291 [2024-06-09 09:14:08.615671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.291 qpair failed and we were unable to recover it. 
00:35:46.291 [2024-06-09 09:14:08.625544] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.291 [2024-06-09 09:14:08.625676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.291 [2024-06-09 09:14:08.625707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.291 [2024-06-09 09:14:08.625715] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.291 [2024-06-09 09:14:08.625722] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.291 [2024-06-09 09:14:08.625745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.291 qpair failed and we were unable to recover it. 
00:35:46.291 [2024-06-09 09:14:08.635520] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.291 [2024-06-09 09:14:08.635662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.291 [2024-06-09 09:14:08.635692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.291 [2024-06-09 09:14:08.635702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.291 [2024-06-09 09:14:08.635708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.291 [2024-06-09 09:14:08.635731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.291 qpair failed and we were unable to recover it. 
00:35:46.291 [2024-06-09 09:14:08.645568] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.291 [2024-06-09 09:14:08.645695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.291 [2024-06-09 09:14:08.645725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.291 [2024-06-09 09:14:08.645735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.291 [2024-06-09 09:14:08.645741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.291 [2024-06-09 09:14:08.645764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.291 qpair failed and we were unable to recover it. 
00:35:46.291 [2024-06-09 09:14:08.655556] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.291 [2024-06-09 09:14:08.655682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.291 [2024-06-09 09:14:08.655713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.291 [2024-06-09 09:14:08.655722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.291 [2024-06-09 09:14:08.655729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.291 [2024-06-09 09:14:08.655753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.291 qpair failed and we were unable to recover it. 
00:35:46.291 [2024-06-09 09:14:08.665617] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.291 [2024-06-09 09:14:08.665737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.291 [2024-06-09 09:14:08.665766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.291 [2024-06-09 09:14:08.665776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.291 [2024-06-09 09:14:08.665782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.291 [2024-06-09 09:14:08.665804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.291 qpair failed and we were unable to recover it. 
00:35:46.291 [2024-06-09 09:14:08.675689] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.291 [2024-06-09 09:14:08.675827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.291 [2024-06-09 09:14:08.675857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.291 [2024-06-09 09:14:08.675867] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.291 [2024-06-09 09:14:08.675873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.291 [2024-06-09 09:14:08.675895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.291 qpair failed and we were unable to recover it. 
00:35:46.291 [2024-06-09 09:14:08.685644] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.291 [2024-06-09 09:14:08.685777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.291 [2024-06-09 09:14:08.685806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.291 [2024-06-09 09:14:08.685823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.291 [2024-06-09 09:14:08.685829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.291 [2024-06-09 09:14:08.685853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.291 qpair failed and we were unable to recover it. 
00:35:46.291 [2024-06-09 09:14:08.695653] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.291 [2024-06-09 09:14:08.695832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.291 [2024-06-09 09:14:08.695862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.291 [2024-06-09 09:14:08.695872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.291 [2024-06-09 09:14:08.695878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.291 [2024-06-09 09:14:08.695903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.291 qpair failed and we were unable to recover it. 
00:35:46.291 [2024-06-09 09:14:08.705632] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.291 [2024-06-09 09:14:08.705760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.291 [2024-06-09 09:14:08.705790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.291 [2024-06-09 09:14:08.705800] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.291 [2024-06-09 09:14:08.705807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.291 [2024-06-09 09:14:08.705829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.291 qpair failed and we were unable to recover it. 
00:35:46.291 [2024-06-09 09:14:08.715661] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.291 [2024-06-09 09:14:08.715831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.291 [2024-06-09 09:14:08.715861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.291 [2024-06-09 09:14:08.715871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.291 [2024-06-09 09:14:08.715877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.292 [2024-06-09 09:14:08.715899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.292 qpair failed and we were unable to recover it. 
00:35:46.292 [2024-06-09 09:14:08.725778] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.292 [2024-06-09 09:14:08.725910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.292 [2024-06-09 09:14:08.725942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.292 [2024-06-09 09:14:08.725952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.292 [2024-06-09 09:14:08.725958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.292 [2024-06-09 09:14:08.725981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.292 qpair failed and we were unable to recover it. 
00:35:46.292 [2024-06-09 09:14:08.735726] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.292 [2024-06-09 09:14:08.735860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.292 [2024-06-09 09:14:08.735890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.292 [2024-06-09 09:14:08.735900] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.292 [2024-06-09 09:14:08.735906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.292 [2024-06-09 09:14:08.735929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.292 qpair failed and we were unable to recover it. 
00:35:46.292 [2024-06-09 09:14:08.745885] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.292 [2024-06-09 09:14:08.746019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.292 [2024-06-09 09:14:08.746060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.292 [2024-06-09 09:14:08.746072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.292 [2024-06-09 09:14:08.746078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.292 [2024-06-09 09:14:08.746107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.292 qpair failed and we were unable to recover it. 
00:35:46.292 [2024-06-09 09:14:08.755820] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.292 [2024-06-09 09:14:08.755997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.292 [2024-06-09 09:14:08.756038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.292 [2024-06-09 09:14:08.756050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.292 [2024-06-09 09:14:08.756057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.292 [2024-06-09 09:14:08.756087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.292 qpair failed and we were unable to recover it. 
00:35:46.292 [2024-06-09 09:14:08.765876] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.292 [2024-06-09 09:14:08.766014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.292 [2024-06-09 09:14:08.766055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.292 [2024-06-09 09:14:08.766066] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.292 [2024-06-09 09:14:08.766073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.292 [2024-06-09 09:14:08.766103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.292 qpair failed and we were unable to recover it. 
00:35:46.292 [2024-06-09 09:14:08.775915] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.292 [2024-06-09 09:14:08.776050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.292 [2024-06-09 09:14:08.776099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.292 [2024-06-09 09:14:08.776111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.292 [2024-06-09 09:14:08.776118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.292 [2024-06-09 09:14:08.776148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.292 qpair failed and we were unable to recover it. 
00:35:46.292 [2024-06-09 09:14:08.785970] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.292 [2024-06-09 09:14:08.786103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.292 [2024-06-09 09:14:08.786144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.292 [2024-06-09 09:14:08.786156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.292 [2024-06-09 09:14:08.786162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.292 [2024-06-09 09:14:08.786191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.292 qpair failed and we were unable to recover it. 
00:35:46.292 [2024-06-09 09:14:08.796059] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.292 [2024-06-09 09:14:08.796213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.292 [2024-06-09 09:14:08.796254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.292 [2024-06-09 09:14:08.796265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.292 [2024-06-09 09:14:08.796272] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.292 [2024-06-09 09:14:08.796299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.292 qpair failed and we were unable to recover it. 
00:35:46.292 [2024-06-09 09:14:08.805929] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.292 [2024-06-09 09:14:08.806046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.292 [2024-06-09 09:14:08.806079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.292 [2024-06-09 09:14:08.806089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.292 [2024-06-09 09:14:08.806095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.292 [2024-06-09 09:14:08.806120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.292 qpair failed and we were unable to recover it. 
00:35:46.292 [2024-06-09 09:14:08.816043] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.292 [2024-06-09 09:14:08.816302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.292 [2024-06-09 09:14:08.816332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.292 [2024-06-09 09:14:08.816342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.292 [2024-06-09 09:14:08.816349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.292 [2024-06-09 09:14:08.816378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.292 qpair failed and we were unable to recover it. 
00:35:46.292 [2024-06-09 09:14:08.826110] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.292 [2024-06-09 09:14:08.826231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.292 [2024-06-09 09:14:08.826261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.292 [2024-06-09 09:14:08.826271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.292 [2024-06-09 09:14:08.826277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.293 [2024-06-09 09:14:08.826300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.293 qpair failed and we were unable to recover it. 
00:35:46.293 [2024-06-09 09:14:08.836150] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.293 [2024-06-09 09:14:08.836302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.293 [2024-06-09 09:14:08.836333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.293 [2024-06-09 09:14:08.836343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.293 [2024-06-09 09:14:08.836349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.293 [2024-06-09 09:14:08.836373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.293 qpair failed and we were unable to recover it.
00:35:46.293 [2024-06-09 09:14:08.846155] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.293 [2024-06-09 09:14:08.846278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.293 [2024-06-09 09:14:08.846308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.293 [2024-06-09 09:14:08.846318] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.293 [2024-06-09 09:14:08.846324] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.293 [2024-06-09 09:14:08.846347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.293 qpair failed and we were unable to recover it.
00:35:46.556 [2024-06-09 09:14:08.856170] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.556 [2024-06-09 09:14:08.856296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.556 [2024-06-09 09:14:08.856327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.556 [2024-06-09 09:14:08.856336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.556 [2024-06-09 09:14:08.856343] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.556 [2024-06-09 09:14:08.856366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.556 qpair failed and we were unable to recover it.
00:35:46.556 [2024-06-09 09:14:08.866301] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.556 [2024-06-09 09:14:08.866452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.556 [2024-06-09 09:14:08.866495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.556 [2024-06-09 09:14:08.866505] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.557 [2024-06-09 09:14:08.866511] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.557 [2024-06-09 09:14:08.866536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.557 qpair failed and we were unable to recover it.
00:35:46.557 [2024-06-09 09:14:08.876221] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.557 [2024-06-09 09:14:08.876359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.557 [2024-06-09 09:14:08.876388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.557 [2024-06-09 09:14:08.876398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.557 [2024-06-09 09:14:08.876415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.557 [2024-06-09 09:14:08.876438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.557 qpair failed and we were unable to recover it.
00:35:46.557 [2024-06-09 09:14:08.886290] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.557 [2024-06-09 09:14:08.886447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.557 [2024-06-09 09:14:08.886480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.557 [2024-06-09 09:14:08.886494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.557 [2024-06-09 09:14:08.886500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.557 [2024-06-09 09:14:08.886526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.557 qpair failed and we were unable to recover it.
00:35:46.557 [2024-06-09 09:14:08.896303] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.557 [2024-06-09 09:14:08.896437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.557 [2024-06-09 09:14:08.896469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.557 [2024-06-09 09:14:08.896478] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.557 [2024-06-09 09:14:08.896484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.557 [2024-06-09 09:14:08.896507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.557 qpair failed and we were unable to recover it.
00:35:46.557 [2024-06-09 09:14:08.906331] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.557 [2024-06-09 09:14:08.906476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.557 [2024-06-09 09:14:08.906508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.557 [2024-06-09 09:14:08.906520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.557 [2024-06-09 09:14:08.906527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.557 [2024-06-09 09:14:08.906560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.557 qpair failed and we were unable to recover it.
00:35:46.557 [2024-06-09 09:14:08.916354] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.557 [2024-06-09 09:14:08.916489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.557 [2024-06-09 09:14:08.916520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.557 [2024-06-09 09:14:08.916530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.557 [2024-06-09 09:14:08.916537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.557 [2024-06-09 09:14:08.916560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.557 qpair failed and we were unable to recover it.
00:35:46.557 [2024-06-09 09:14:08.926289] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.557 [2024-06-09 09:14:08.926422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.557 [2024-06-09 09:14:08.926453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.557 [2024-06-09 09:14:08.926463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.557 [2024-06-09 09:14:08.926469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.557 [2024-06-09 09:14:08.926492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.557 qpair failed and we were unable to recover it.
00:35:46.557 [2024-06-09 09:14:08.936428] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.557 [2024-06-09 09:14:08.936559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.557 [2024-06-09 09:14:08.936588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.557 [2024-06-09 09:14:08.936599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.557 [2024-06-09 09:14:08.936605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.557 [2024-06-09 09:14:08.936628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.557 qpair failed and we were unable to recover it.
00:35:46.557 [2024-06-09 09:14:08.946455] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.557 [2024-06-09 09:14:08.946709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.557 [2024-06-09 09:14:08.946738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.557 [2024-06-09 09:14:08.946747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.557 [2024-06-09 09:14:08.946753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.557 [2024-06-09 09:14:08.946775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.557 qpair failed and we were unable to recover it.
00:35:46.557 [2024-06-09 09:14:08.956509] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.557 [2024-06-09 09:14:08.956644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.557 [2024-06-09 09:14:08.956673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.557 [2024-06-09 09:14:08.956683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.557 [2024-06-09 09:14:08.956690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.557 [2024-06-09 09:14:08.956712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.557 qpair failed and we were unable to recover it.
00:35:46.557 [2024-06-09 09:14:08.966563] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.557 [2024-06-09 09:14:08.966728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.557 [2024-06-09 09:14:08.966758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.557 [2024-06-09 09:14:08.966767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.557 [2024-06-09 09:14:08.966775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.557 [2024-06-09 09:14:08.966799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.557 qpair failed and we were unable to recover it.
00:35:46.557 [2024-06-09 09:14:08.976610] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.557 [2024-06-09 09:14:08.976740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.557 [2024-06-09 09:14:08.976770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.557 [2024-06-09 09:14:08.976779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.557 [2024-06-09 09:14:08.976786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.557 [2024-06-09 09:14:08.976807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.557 qpair failed and we were unable to recover it.
00:35:46.557 [2024-06-09 09:14:08.986524] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.557 [2024-06-09 09:14:08.986647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.557 [2024-06-09 09:14:08.986678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.557 [2024-06-09 09:14:08.986688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.557 [2024-06-09 09:14:08.986694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.557 [2024-06-09 09:14:08.986717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.557 qpair failed and we were unable to recover it.
00:35:46.557 [2024-06-09 09:14:08.996625] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.557 [2024-06-09 09:14:08.996768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.558 [2024-06-09 09:14:08.996797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.558 [2024-06-09 09:14:08.996807] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.558 [2024-06-09 09:14:08.996820] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.558 [2024-06-09 09:14:08.996843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.558 qpair failed and we were unable to recover it.
00:35:46.558 [2024-06-09 09:14:09.006563] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.558 [2024-06-09 09:14:09.006695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.558 [2024-06-09 09:14:09.006726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.558 [2024-06-09 09:14:09.006736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.558 [2024-06-09 09:14:09.006742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.558 [2024-06-09 09:14:09.006766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.558 qpair failed and we were unable to recover it.
00:35:46.558 [2024-06-09 09:14:09.016711] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.558 [2024-06-09 09:14:09.016839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.558 [2024-06-09 09:14:09.016869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.558 [2024-06-09 09:14:09.016879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.558 [2024-06-09 09:14:09.016885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.558 [2024-06-09 09:14:09.016907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.558 qpair failed and we were unable to recover it.
00:35:46.558 [2024-06-09 09:14:09.026738] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.558 [2024-06-09 09:14:09.026858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.558 [2024-06-09 09:14:09.026888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.558 [2024-06-09 09:14:09.026898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.558 [2024-06-09 09:14:09.026904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.558 [2024-06-09 09:14:09.026927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.558 qpair failed and we were unable to recover it.
00:35:46.558 [2024-06-09 09:14:09.036711] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.558 [2024-06-09 09:14:09.036855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.558 [2024-06-09 09:14:09.036897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.558 [2024-06-09 09:14:09.036908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.558 [2024-06-09 09:14:09.036916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.558 [2024-06-09 09:14:09.036946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.558 qpair failed and we were unable to recover it.
00:35:46.558 [2024-06-09 09:14:09.046826] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.558 [2024-06-09 09:14:09.046959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.558 [2024-06-09 09:14:09.047001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.558 [2024-06-09 09:14:09.047012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.558 [2024-06-09 09:14:09.047019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.558 [2024-06-09 09:14:09.047048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.558 qpair failed and we were unable to recover it.
00:35:46.558 [2024-06-09 09:14:09.056747] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.558 [2024-06-09 09:14:09.056927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.558 [2024-06-09 09:14:09.056959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.558 [2024-06-09 09:14:09.056969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.558 [2024-06-09 09:14:09.056977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.558 [2024-06-09 09:14:09.057002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.558 qpair failed and we were unable to recover it.
00:35:46.558 [2024-06-09 09:14:09.066753] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.558 [2024-06-09 09:14:09.066874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.558 [2024-06-09 09:14:09.066905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.558 [2024-06-09 09:14:09.066915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.558 [2024-06-09 09:14:09.066922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.558 [2024-06-09 09:14:09.066945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.558 qpair failed and we were unable to recover it.
00:35:46.558 [2024-06-09 09:14:09.076906] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.558 [2024-06-09 09:14:09.077046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.558 [2024-06-09 09:14:09.077076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.558 [2024-06-09 09:14:09.077085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.558 [2024-06-09 09:14:09.077091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.558 [2024-06-09 09:14:09.077121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.558 qpair failed and we were unable to recover it.
00:35:46.558 [2024-06-09 09:14:09.086903] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.558 [2024-06-09 09:14:09.087027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.558 [2024-06-09 09:14:09.087057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.558 [2024-06-09 09:14:09.087072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.558 [2024-06-09 09:14:09.087079] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.558 [2024-06-09 09:14:09.087101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.558 qpair failed and we were unable to recover it.
00:35:46.558 [2024-06-09 09:14:09.096937] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.558 [2024-06-09 09:14:09.097065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.558 [2024-06-09 09:14:09.097097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.558 [2024-06-09 09:14:09.097107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.558 [2024-06-09 09:14:09.097113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.558 [2024-06-09 09:14:09.097137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.558 qpair failed and we were unable to recover it.
00:35:46.558 [2024-06-09 09:14:09.107014] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.558 [2024-06-09 09:14:09.107139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.558 [2024-06-09 09:14:09.107169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.558 [2024-06-09 09:14:09.107178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.558 [2024-06-09 09:14:09.107185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.558 [2024-06-09 09:14:09.107209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.558 qpair failed and we were unable to recover it.
00:35:46.822 [2024-06-09 09:14:09.117055] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.822 [2024-06-09 09:14:09.117221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.822 [2024-06-09 09:14:09.117251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.822 [2024-06-09 09:14:09.117261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.822 [2024-06-09 09:14:09.117268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.822 [2024-06-09 09:14:09.117292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.822 qpair failed and we were unable to recover it.
00:35:46.822 [2024-06-09 09:14:09.126978] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.822 [2024-06-09 09:14:09.127102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.822 [2024-06-09 09:14:09.127133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.822 [2024-06-09 09:14:09.127142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.822 [2024-06-09 09:14:09.127150] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.822 [2024-06-09 09:14:09.127172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.822 qpair failed and we were unable to recover it.
00:35:46.822 [2024-06-09 09:14:09.137066] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.822 [2024-06-09 09:14:09.137191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.822 [2024-06-09 09:14:09.137220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.822 [2024-06-09 09:14:09.137230] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.822 [2024-06-09 09:14:09.137236] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.822 [2024-06-09 09:14:09.137260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.822 qpair failed and we were unable to recover it.
00:35:46.822 [2024-06-09 09:14:09.147109] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.822 [2024-06-09 09:14:09.147229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.822 [2024-06-09 09:14:09.147260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.822 [2024-06-09 09:14:09.147270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.822 [2024-06-09 09:14:09.147276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.822 [2024-06-09 09:14:09.147299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.822 qpair failed and we were unable to recover it.
00:35:46.822 [2024-06-09 09:14:09.157184] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.822 [2024-06-09 09:14:09.157322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.822 [2024-06-09 09:14:09.157352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.822 [2024-06-09 09:14:09.157362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.822 [2024-06-09 09:14:09.157368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.822 [2024-06-09 09:14:09.157390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.822 qpair failed and we were unable to recover it.
00:35:46.822 [2024-06-09 09:14:09.167140] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.822 [2024-06-09 09:14:09.167268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.822 [2024-06-09 09:14:09.167298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.822 [2024-06-09 09:14:09.167308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.822 [2024-06-09 09:14:09.167315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.822 [2024-06-09 09:14:09.167338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.822 qpair failed and we were unable to recover it.
00:35:46.822 [2024-06-09 09:14:09.177245] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.822 [2024-06-09 09:14:09.177377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.822 [2024-06-09 09:14:09.177425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.822 [2024-06-09 09:14:09.177437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.822 [2024-06-09 09:14:09.177443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.822 [2024-06-09 09:14:09.177467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.822 qpair failed and we were unable to recover it.
00:35:46.822 [2024-06-09 09:14:09.187245] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.822 [2024-06-09 09:14:09.187383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.823 [2024-06-09 09:14:09.187428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.823 [2024-06-09 09:14:09.187438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.823 [2024-06-09 09:14:09.187445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:46.823 [2024-06-09 09:14:09.187468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:46.823 qpair failed and we were unable to recover it.
00:35:46.823 [2024-06-09 09:14:09.197296] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.823 [2024-06-09 09:14:09.197463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.823 [2024-06-09 09:14:09.197494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.823 [2024-06-09 09:14:09.197504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.823 [2024-06-09 09:14:09.197511] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.823 [2024-06-09 09:14:09.197534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.823 qpair failed and we were unable to recover it. 
00:35:46.823 [2024-06-09 09:14:09.207321] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.823 [2024-06-09 09:14:09.207452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.823 [2024-06-09 09:14:09.207483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.823 [2024-06-09 09:14:09.207494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.823 [2024-06-09 09:14:09.207501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.823 [2024-06-09 09:14:09.207525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.823 qpair failed and we were unable to recover it. 
00:35:46.823 [2024-06-09 09:14:09.217310] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.823 [2024-06-09 09:14:09.217441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.823 [2024-06-09 09:14:09.217471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.823 [2024-06-09 09:14:09.217481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.823 [2024-06-09 09:14:09.217487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.823 [2024-06-09 09:14:09.217517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.823 qpair failed and we were unable to recover it. 
00:35:46.823 [2024-06-09 09:14:09.227367] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.823 [2024-06-09 09:14:09.227494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.823 [2024-06-09 09:14:09.227525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.823 [2024-06-09 09:14:09.227535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.823 [2024-06-09 09:14:09.227541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.823 [2024-06-09 09:14:09.227564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.823 qpair failed and we were unable to recover it. 
00:35:46.823 [2024-06-09 09:14:09.237410] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.823 [2024-06-09 09:14:09.237549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.823 [2024-06-09 09:14:09.237578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.823 [2024-06-09 09:14:09.237588] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.823 [2024-06-09 09:14:09.237594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.823 [2024-06-09 09:14:09.237618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.823 qpair failed and we were unable to recover it. 
00:35:46.823 [2024-06-09 09:14:09.247340] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.823 [2024-06-09 09:14:09.247469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.823 [2024-06-09 09:14:09.247499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.823 [2024-06-09 09:14:09.247509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.823 [2024-06-09 09:14:09.247515] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.823 [2024-06-09 09:14:09.247539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.823 qpair failed and we were unable to recover it. 
00:35:46.823 [2024-06-09 09:14:09.257466] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.823 [2024-06-09 09:14:09.257593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.823 [2024-06-09 09:14:09.257623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.823 [2024-06-09 09:14:09.257634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.823 [2024-06-09 09:14:09.257640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.823 [2024-06-09 09:14:09.257663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.823 qpair failed and we were unable to recover it. 
00:35:46.823 [2024-06-09 09:14:09.267513] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.823 [2024-06-09 09:14:09.267669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.823 [2024-06-09 09:14:09.267704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.823 [2024-06-09 09:14:09.267713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.823 [2024-06-09 09:14:09.267719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.823 [2024-06-09 09:14:09.267742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.823 qpair failed and we were unable to recover it. 
00:35:46.823 [2024-06-09 09:14:09.277518] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.823 [2024-06-09 09:14:09.277651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.823 [2024-06-09 09:14:09.277681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.823 [2024-06-09 09:14:09.277690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.823 [2024-06-09 09:14:09.277697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.823 [2024-06-09 09:14:09.277720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.823 qpair failed and we were unable to recover it. 
00:35:46.823 [2024-06-09 09:14:09.287537] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.823 [2024-06-09 09:14:09.287663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.823 [2024-06-09 09:14:09.287693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.823 [2024-06-09 09:14:09.287703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.823 [2024-06-09 09:14:09.287709] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.823 [2024-06-09 09:14:09.287733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.823 qpair failed and we were unable to recover it. 
00:35:46.823 [2024-06-09 09:14:09.297466] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.823 [2024-06-09 09:14:09.297604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.823 [2024-06-09 09:14:09.297634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.823 [2024-06-09 09:14:09.297643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.823 [2024-06-09 09:14:09.297651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.823 [2024-06-09 09:14:09.297675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.823 qpair failed and we were unable to recover it. 
00:35:46.823 [2024-06-09 09:14:09.307504] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.823 [2024-06-09 09:14:09.307624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.823 [2024-06-09 09:14:09.307653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.823 [2024-06-09 09:14:09.307663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.823 [2024-06-09 09:14:09.307669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.823 [2024-06-09 09:14:09.307699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.823 qpair failed and we were unable to recover it. 
00:35:46.823 [2024-06-09 09:14:09.317542] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.823 [2024-06-09 09:14:09.317680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.823 [2024-06-09 09:14:09.317709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.823 [2024-06-09 09:14:09.317719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.823 [2024-06-09 09:14:09.317726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.823 [2024-06-09 09:14:09.317749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.824 qpair failed and we were unable to recover it. 
00:35:46.824 [2024-06-09 09:14:09.327660] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.824 [2024-06-09 09:14:09.327770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.824 [2024-06-09 09:14:09.327791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.824 [2024-06-09 09:14:09.327800] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.824 [2024-06-09 09:14:09.327806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.824 [2024-06-09 09:14:09.327824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.824 qpair failed and we were unable to recover it. 
00:35:46.824 [2024-06-09 09:14:09.337778] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.824 [2024-06-09 09:14:09.337893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.824 [2024-06-09 09:14:09.337914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.824 [2024-06-09 09:14:09.337922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.824 [2024-06-09 09:14:09.337928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.824 [2024-06-09 09:14:09.337945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.824 qpair failed and we were unable to recover it. 
00:35:46.824 [2024-06-09 09:14:09.347794] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.824 [2024-06-09 09:14:09.347908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.824 [2024-06-09 09:14:09.347928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.824 [2024-06-09 09:14:09.347936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.824 [2024-06-09 09:14:09.347942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.824 [2024-06-09 09:14:09.347959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.824 qpair failed and we were unable to recover it. 
00:35:46.824 [2024-06-09 09:14:09.357643] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.824 [2024-06-09 09:14:09.357754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.824 [2024-06-09 09:14:09.357773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.824 [2024-06-09 09:14:09.357780] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.824 [2024-06-09 09:14:09.357786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.824 [2024-06-09 09:14:09.357804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.824 qpair failed and we were unable to recover it. 
00:35:46.824 [2024-06-09 09:14:09.367819] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.824 [2024-06-09 09:14:09.367926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.824 [2024-06-09 09:14:09.367945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.824 [2024-06-09 09:14:09.367952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.824 [2024-06-09 09:14:09.367958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.824 [2024-06-09 09:14:09.367974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.824 qpair failed and we were unable to recover it. 
00:35:46.824 [2024-06-09 09:14:09.377812] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.824 [2024-06-09 09:14:09.377922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.824 [2024-06-09 09:14:09.377940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.824 [2024-06-09 09:14:09.377948] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.824 [2024-06-09 09:14:09.377954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:46.824 [2024-06-09 09:14:09.377970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:46.824 qpair failed and we were unable to recover it. 
00:35:47.085 [2024-06-09 09:14:09.387856] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.085 [2024-06-09 09:14:09.387976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.085 [2024-06-09 09:14:09.387994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.085 [2024-06-09 09:14:09.388002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.085 [2024-06-09 09:14:09.388008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.085 [2024-06-09 09:14:09.388024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.085 qpair failed and we were unable to recover it. 
00:35:47.085 [2024-06-09 09:14:09.397820] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.085 [2024-06-09 09:14:09.397927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.085 [2024-06-09 09:14:09.397945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.085 [2024-06-09 09:14:09.397953] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.085 [2024-06-09 09:14:09.397963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.085 [2024-06-09 09:14:09.397979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.085 qpair failed and we were unable to recover it. 
00:35:47.085 [2024-06-09 09:14:09.407874] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.085 [2024-06-09 09:14:09.407975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.085 [2024-06-09 09:14:09.407992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.085 [2024-06-09 09:14:09.408000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.085 [2024-06-09 09:14:09.408006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.085 [2024-06-09 09:14:09.408022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.085 qpair failed and we were unable to recover it. 
00:35:47.085 [2024-06-09 09:14:09.417935] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.085 [2024-06-09 09:14:09.418044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.085 [2024-06-09 09:14:09.418062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.085 [2024-06-09 09:14:09.418069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.085 [2024-06-09 09:14:09.418075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.085 [2024-06-09 09:14:09.418091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.085 qpair failed and we were unable to recover it. 
00:35:47.085 [2024-06-09 09:14:09.427926] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.085 [2024-06-09 09:14:09.428029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.085 [2024-06-09 09:14:09.428047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.085 [2024-06-09 09:14:09.428054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.085 [2024-06-09 09:14:09.428060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.085 [2024-06-09 09:14:09.428075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.085 qpair failed and we were unable to recover it. 
00:35:47.085 [2024-06-09 09:14:09.437947] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.085 [2024-06-09 09:14:09.438082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.085 [2024-06-09 09:14:09.438107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.085 [2024-06-09 09:14:09.438116] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.085 [2024-06-09 09:14:09.438123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.085 [2024-06-09 09:14:09.438143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.085 qpair failed and we were unable to recover it. 
00:35:47.085 [2024-06-09 09:14:09.447949] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.085 [2024-06-09 09:14:09.448056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.085 [2024-06-09 09:14:09.448074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.085 [2024-06-09 09:14:09.448082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.085 [2024-06-09 09:14:09.448089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.085 [2024-06-09 09:14:09.448105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.085 qpair failed and we were unable to recover it. 
00:35:47.085 [2024-06-09 09:14:09.458011] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.085 [2024-06-09 09:14:09.458121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.085 [2024-06-09 09:14:09.458147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.085 [2024-06-09 09:14:09.458155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.085 [2024-06-09 09:14:09.458162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.085 [2024-06-09 09:14:09.458183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.085 qpair failed and we were unable to recover it. 
00:35:47.085 [2024-06-09 09:14:09.468041] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.085 [2024-06-09 09:14:09.468149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.085 [2024-06-09 09:14:09.468168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.085 [2024-06-09 09:14:09.468176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.085 [2024-06-09 09:14:09.468182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.085 [2024-06-09 09:14:09.468199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.085 qpair failed and we were unable to recover it. 
00:35:47.085 [2024-06-09 09:14:09.477982] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.085 [2024-06-09 09:14:09.478091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.085 [2024-06-09 09:14:09.478117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.085 [2024-06-09 09:14:09.478126] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.085 [2024-06-09 09:14:09.478132] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.085 [2024-06-09 09:14:09.478153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.085 qpair failed and we were unable to recover it. 
00:35:47.349 [2024-06-09 09:14:09.838947] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.349 [2024-06-09 09:14:09.839057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.349 [2024-06-09 09:14:09.839082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.349 [2024-06-09 09:14:09.839091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.349 [2024-06-09 09:14:09.839098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.349 [2024-06-09 09:14:09.839119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.349 qpair failed and we were unable to recover it. 
00:35:47.349 [2024-06-09 09:14:09.849016] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.349 [2024-06-09 09:14:09.849123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.349 [2024-06-09 09:14:09.849149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.349 [2024-06-09 09:14:09.849158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.349 [2024-06-09 09:14:09.849165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.349 [2024-06-09 09:14:09.849186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.349 qpair failed and we were unable to recover it. 
00:35:47.349 [2024-06-09 09:14:09.859085] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.349 [2024-06-09 09:14:09.859191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.349 [2024-06-09 09:14:09.859216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.349 [2024-06-09 09:14:09.859225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.349 [2024-06-09 09:14:09.859232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.349 [2024-06-09 09:14:09.859253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.349 qpair failed and we were unable to recover it. 
00:35:47.349 [2024-06-09 09:14:09.869118] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.349 [2024-06-09 09:14:09.869221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.349 [2024-06-09 09:14:09.869246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.349 [2024-06-09 09:14:09.869255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.349 [2024-06-09 09:14:09.869262] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.349 [2024-06-09 09:14:09.869282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.349 qpair failed and we were unable to recover it. 
00:35:47.349 [2024-06-09 09:14:09.879103] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.349 [2024-06-09 09:14:09.879206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.349 [2024-06-09 09:14:09.879224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.349 [2024-06-09 09:14:09.879232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.349 [2024-06-09 09:14:09.879238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.349 [2024-06-09 09:14:09.879255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.349 qpair failed and we were unable to recover it. 
00:35:47.349 [2024-06-09 09:14:09.889136] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.349 [2024-06-09 09:14:09.889234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.349 [2024-06-09 09:14:09.889252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.349 [2024-06-09 09:14:09.889264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.349 [2024-06-09 09:14:09.889271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.349 [2024-06-09 09:14:09.889287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.349 qpair failed and we were unable to recover it. 
00:35:47.349 [2024-06-09 09:14:09.899125] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.349 [2024-06-09 09:14:09.899208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.349 [2024-06-09 09:14:09.899225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.349 [2024-06-09 09:14:09.899232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.349 [2024-06-09 09:14:09.899238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.349 [2024-06-09 09:14:09.899254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.349 qpair failed and we were unable to recover it. 
00:35:47.611 [2024-06-09 09:14:09.909179] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.611 [2024-06-09 09:14:09.909275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.611 [2024-06-09 09:14:09.909293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.611 [2024-06-09 09:14:09.909300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.611 [2024-06-09 09:14:09.909306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.611 [2024-06-09 09:14:09.909322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.611 qpair failed and we were unable to recover it. 
00:35:47.611 [2024-06-09 09:14:09.919163] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.611 [2024-06-09 09:14:09.919264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.611 [2024-06-09 09:14:09.919281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.611 [2024-06-09 09:14:09.919289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.611 [2024-06-09 09:14:09.919295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.611 [2024-06-09 09:14:09.919311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.611 qpair failed and we were unable to recover it. 
00:35:47.611 [2024-06-09 09:14:09.929226] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.611 [2024-06-09 09:14:09.929321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.611 [2024-06-09 09:14:09.929338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.611 [2024-06-09 09:14:09.929345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.611 [2024-06-09 09:14:09.929351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.612 [2024-06-09 09:14:09.929367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.612 qpair failed and we were unable to recover it. 
00:35:47.612 [2024-06-09 09:14:09.939284] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.612 [2024-06-09 09:14:09.939385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.612 [2024-06-09 09:14:09.939408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.612 [2024-06-09 09:14:09.939416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.612 [2024-06-09 09:14:09.939422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.612 [2024-06-09 09:14:09.939438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.612 qpair failed and we were unable to recover it. 
00:35:47.612 [2024-06-09 09:14:09.949188] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.612 [2024-06-09 09:14:09.949308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.612 [2024-06-09 09:14:09.949325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.612 [2024-06-09 09:14:09.949333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.612 [2024-06-09 09:14:09.949339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.612 [2024-06-09 09:14:09.949354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.612 qpair failed and we were unable to recover it. 
00:35:47.612 [2024-06-09 09:14:09.959295] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.612 [2024-06-09 09:14:09.959442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.612 [2024-06-09 09:14:09.959460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.612 [2024-06-09 09:14:09.959467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.612 [2024-06-09 09:14:09.959473] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.612 [2024-06-09 09:14:09.959489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.612 qpair failed and we were unable to recover it. 
00:35:47.612 [2024-06-09 09:14:09.969330] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.612 [2024-06-09 09:14:09.969427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.612 [2024-06-09 09:14:09.969444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.612 [2024-06-09 09:14:09.969452] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.612 [2024-06-09 09:14:09.969458] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.612 [2024-06-09 09:14:09.969474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.612 qpair failed and we were unable to recover it. 
00:35:47.612 [2024-06-09 09:14:09.979416] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.612 [2024-06-09 09:14:09.979549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.612 [2024-06-09 09:14:09.979567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.612 [2024-06-09 09:14:09.979578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.612 [2024-06-09 09:14:09.979584] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.612 [2024-06-09 09:14:09.979600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.612 qpair failed and we were unable to recover it. 
00:35:47.612 [2024-06-09 09:14:09.989300] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.612 [2024-06-09 09:14:09.989400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.612 [2024-06-09 09:14:09.989423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.612 [2024-06-09 09:14:09.989431] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.612 [2024-06-09 09:14:09.989437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.612 [2024-06-09 09:14:09.989453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.612 qpair failed and we were unable to recover it. 
00:35:47.612 [2024-06-09 09:14:09.999414] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.612 [2024-06-09 09:14:09.999561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.612 [2024-06-09 09:14:09.999578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.612 [2024-06-09 09:14:09.999585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.612 [2024-06-09 09:14:09.999591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.612 [2024-06-09 09:14:09.999607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.612 qpair failed and we were unable to recover it. 
00:35:47.612 [2024-06-09 09:14:10.009419] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.612 [2024-06-09 09:14:10.009548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.612 [2024-06-09 09:14:10.009566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.612 [2024-06-09 09:14:10.009574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.612 [2024-06-09 09:14:10.009580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.612 [2024-06-09 09:14:10.009597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.612 qpair failed and we were unable to recover it. 
00:35:47.612 [2024-06-09 09:14:10.019515] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.612 [2024-06-09 09:14:10.019641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.612 [2024-06-09 09:14:10.019659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.612 [2024-06-09 09:14:10.019667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.612 [2024-06-09 09:14:10.019673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.612 [2024-06-09 09:14:10.019689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.612 qpair failed and we were unable to recover it. 
00:35:47.612 [2024-06-09 09:14:10.029517] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.612 [2024-06-09 09:14:10.029738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.612 [2024-06-09 09:14:10.029755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.612 [2024-06-09 09:14:10.029762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.612 [2024-06-09 09:14:10.029769] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.612 [2024-06-09 09:14:10.029785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.612 qpair failed and we were unable to recover it. 
00:35:47.612 [2024-06-09 09:14:10.039565] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.612 [2024-06-09 09:14:10.039669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.612 [2024-06-09 09:14:10.039686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.612 [2024-06-09 09:14:10.039695] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.612 [2024-06-09 09:14:10.039701] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.612 [2024-06-09 09:14:10.039718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.612 qpair failed and we were unable to recover it. 
00:35:47.612 [2024-06-09 09:14:10.049538] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.612 [2024-06-09 09:14:10.049639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.612 [2024-06-09 09:14:10.049657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.612 [2024-06-09 09:14:10.049665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.612 [2024-06-09 09:14:10.049671] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.612 [2024-06-09 09:14:10.049687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.612 qpair failed and we were unable to recover it. 
00:35:47.612 [2024-06-09 09:14:10.059630] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.612 [2024-06-09 09:14:10.059730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.612 [2024-06-09 09:14:10.059746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.612 [2024-06-09 09:14:10.059754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.612 [2024-06-09 09:14:10.059760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.612 [2024-06-09 09:14:10.059776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.612 qpair failed and we were unable to recover it. 
00:35:47.612 [2024-06-09 09:14:10.069600] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.613 [2024-06-09 09:14:10.069700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.613 [2024-06-09 09:14:10.069720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.613 [2024-06-09 09:14:10.069728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.613 [2024-06-09 09:14:10.069734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.613 [2024-06-09 09:14:10.069749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.613 qpair failed and we were unable to recover it. 
00:35:47.613 [2024-06-09 09:14:10.079609] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.613 [2024-06-09 09:14:10.079711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.613 [2024-06-09 09:14:10.079729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.613 [2024-06-09 09:14:10.079736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.613 [2024-06-09 09:14:10.079742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.613 [2024-06-09 09:14:10.079758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.613 qpair failed and we were unable to recover it. 
00:35:47.613 [2024-06-09 09:14:10.089632] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.613 [2024-06-09 09:14:10.089725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.613 [2024-06-09 09:14:10.089742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.613 [2024-06-09 09:14:10.089750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.613 [2024-06-09 09:14:10.089756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.613 [2024-06-09 09:14:10.089771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.613 qpair failed and we were unable to recover it. 
00:35:47.613 [2024-06-09 09:14:10.099698] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.613 [2024-06-09 09:14:10.099805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.613 [2024-06-09 09:14:10.099822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.613 [2024-06-09 09:14:10.099829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.613 [2024-06-09 09:14:10.099836] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.613 [2024-06-09 09:14:10.099851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.613 qpair failed and we were unable to recover it. 
00:35:47.613 [2024-06-09 09:14:10.109676] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.613 [2024-06-09 09:14:10.109774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.613 [2024-06-09 09:14:10.109791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.613 [2024-06-09 09:14:10.109798] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.613 [2024-06-09 09:14:10.109805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:47.613 [2024-06-09 09:14:10.109825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:47.613 qpair failed and we were unable to recover it. 
00:35:47.613 [2024-06-09 09:14:10.119705] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.613 [2024-06-09 09:14:10.119807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.613 [2024-06-09 09:14:10.119824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.613 [2024-06-09 09:14:10.119832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.613 [2024-06-09 09:14:10.119838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.613 [2024-06-09 09:14:10.119854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.613 qpair failed and we were unable to recover it.
00:35:47.613 [2024-06-09 09:14:10.129758] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.613 [2024-06-09 09:14:10.129857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.613 [2024-06-09 09:14:10.129874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.613 [2024-06-09 09:14:10.129881] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.613 [2024-06-09 09:14:10.129888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.613 [2024-06-09 09:14:10.129903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.613 qpair failed and we were unable to recover it.
00:35:47.613 [2024-06-09 09:14:10.139807] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.613 [2024-06-09 09:14:10.139908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.613 [2024-06-09 09:14:10.139926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.613 [2024-06-09 09:14:10.139934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.613 [2024-06-09 09:14:10.139940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.613 [2024-06-09 09:14:10.139956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.613 qpair failed and we were unable to recover it.
00:35:47.613 [2024-06-09 09:14:10.149839] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.613 [2024-06-09 09:14:10.149947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.613 [2024-06-09 09:14:10.149965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.613 [2024-06-09 09:14:10.149972] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.613 [2024-06-09 09:14:10.149978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.613 [2024-06-09 09:14:10.149994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.613 qpair failed and we were unable to recover it.
00:35:47.613 [2024-06-09 09:14:10.159844] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.613 [2024-06-09 09:14:10.159954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.613 [2024-06-09 09:14:10.159984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.613 [2024-06-09 09:14:10.159994] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.613 [2024-06-09 09:14:10.160000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.613 [2024-06-09 09:14:10.160021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.613 qpair failed and we were unable to recover it.
00:35:47.876 [2024-06-09 09:14:10.169866] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.876 [2024-06-09 09:14:10.169969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.876 [2024-06-09 09:14:10.169995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.876 [2024-06-09 09:14:10.170004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.876 [2024-06-09 09:14:10.170011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.876 [2024-06-09 09:14:10.170032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.876 qpair failed and we were unable to recover it.
00:35:47.876 [2024-06-09 09:14:10.179915] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.876 [2024-06-09 09:14:10.180021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.876 [2024-06-09 09:14:10.180046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.876 [2024-06-09 09:14:10.180055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.876 [2024-06-09 09:14:10.180061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.876 [2024-06-09 09:14:10.180082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.876 qpair failed and we were unable to recover it.
00:35:47.876 [2024-06-09 09:14:10.189919] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.876 [2024-06-09 09:14:10.190021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.876 [2024-06-09 09:14:10.190039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.876 [2024-06-09 09:14:10.190047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.876 [2024-06-09 09:14:10.190054] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.876 [2024-06-09 09:14:10.190071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.876 qpair failed and we were unable to recover it.
00:35:47.876 [2024-06-09 09:14:10.199934] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.876 [2024-06-09 09:14:10.200036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.876 [2024-06-09 09:14:10.200054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.876 [2024-06-09 09:14:10.200062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.876 [2024-06-09 09:14:10.200072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.876 [2024-06-09 09:14:10.200089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.876 qpair failed and we were unable to recover it.
00:35:47.876 [2024-06-09 09:14:10.209938] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.876 [2024-06-09 09:14:10.210037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.876 [2024-06-09 09:14:10.210055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.876 [2024-06-09 09:14:10.210062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.876 [2024-06-09 09:14:10.210068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.876 [2024-06-09 09:14:10.210084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.876 qpair failed and we were unable to recover it.
00:35:47.876 [2024-06-09 09:14:10.220065] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.876 [2024-06-09 09:14:10.220218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.876 [2024-06-09 09:14:10.220235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.876 [2024-06-09 09:14:10.220243] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.876 [2024-06-09 09:14:10.220249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.876 [2024-06-09 09:14:10.220264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.876 qpair failed and we were unable to recover it.
00:35:47.876 [2024-06-09 09:14:10.230027] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.876 [2024-06-09 09:14:10.230124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.877 [2024-06-09 09:14:10.230142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.877 [2024-06-09 09:14:10.230149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.877 [2024-06-09 09:14:10.230155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.877 [2024-06-09 09:14:10.230171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.877 qpair failed and we were unable to recover it.
00:35:47.877 [2024-06-09 09:14:10.240084] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.877 [2024-06-09 09:14:10.240187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.877 [2024-06-09 09:14:10.240205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.877 [2024-06-09 09:14:10.240212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.877 [2024-06-09 09:14:10.240218] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.877 [2024-06-09 09:14:10.240234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.877 qpair failed and we were unable to recover it.
00:35:47.877 [2024-06-09 09:14:10.249951] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.877 [2024-06-09 09:14:10.250076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.877 [2024-06-09 09:14:10.250102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.877 [2024-06-09 09:14:10.250111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.877 [2024-06-09 09:14:10.250118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.877 [2024-06-09 09:14:10.250139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.877 qpair failed and we were unable to recover it.
00:35:47.877 [2024-06-09 09:14:10.260031] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.877 [2024-06-09 09:14:10.260132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.877 [2024-06-09 09:14:10.260151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.877 [2024-06-09 09:14:10.260159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.877 [2024-06-09 09:14:10.260166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.877 [2024-06-09 09:14:10.260183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.877 qpair failed and we were unable to recover it.
00:35:47.877 [2024-06-09 09:14:10.270136] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.877 [2024-06-09 09:14:10.270232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.877 [2024-06-09 09:14:10.270250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.877 [2024-06-09 09:14:10.270258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.877 [2024-06-09 09:14:10.270264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.877 [2024-06-09 09:14:10.270281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.877 qpair failed and we were unable to recover it.
00:35:47.877 [2024-06-09 09:14:10.280033] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.877 [2024-06-09 09:14:10.280134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.877 [2024-06-09 09:14:10.280151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.877 [2024-06-09 09:14:10.280159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.877 [2024-06-09 09:14:10.280165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.877 [2024-06-09 09:14:10.280181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.877 qpair failed and we were unable to recover it.
00:35:47.877 [2024-06-09 09:14:10.290081] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.877 [2024-06-09 09:14:10.290188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.877 [2024-06-09 09:14:10.290206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.877 [2024-06-09 09:14:10.290214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.877 [2024-06-09 09:14:10.290225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.877 [2024-06-09 09:14:10.290241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.877 qpair failed and we were unable to recover it.
00:35:47.877 [2024-06-09 09:14:10.300259] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.877 [2024-06-09 09:14:10.300369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.877 [2024-06-09 09:14:10.300394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.877 [2024-06-09 09:14:10.300410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.877 [2024-06-09 09:14:10.300418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.877 [2024-06-09 09:14:10.300439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.877 qpair failed and we were unable to recover it.
00:35:47.877 [2024-06-09 09:14:10.310155] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.877 [2024-06-09 09:14:10.310255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.877 [2024-06-09 09:14:10.310274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.877 [2024-06-09 09:14:10.310282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.877 [2024-06-09 09:14:10.310288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.877 [2024-06-09 09:14:10.310305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.877 qpair failed and we were unable to recover it.
00:35:47.877 [2024-06-09 09:14:10.320312] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.877 [2024-06-09 09:14:10.320450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.877 [2024-06-09 09:14:10.320468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.877 [2024-06-09 09:14:10.320476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.877 [2024-06-09 09:14:10.320482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.877 [2024-06-09 09:14:10.320498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.877 qpair failed and we were unable to recover it.
00:35:47.877 [2024-06-09 09:14:10.330280] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.877 [2024-06-09 09:14:10.330379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.877 [2024-06-09 09:14:10.330397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.877 [2024-06-09 09:14:10.330411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.877 [2024-06-09 09:14:10.330417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.877 [2024-06-09 09:14:10.330433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.877 qpair failed and we were unable to recover it.
00:35:47.877 [2024-06-09 09:14:10.340339] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.877 [2024-06-09 09:14:10.340454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.877 [2024-06-09 09:14:10.340472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.877 [2024-06-09 09:14:10.340479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.877 [2024-06-09 09:14:10.340485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.877 [2024-06-09 09:14:10.340502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.877 qpair failed and we were unable to recover it.
00:35:47.877 [2024-06-09 09:14:10.350358] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.877 [2024-06-09 09:14:10.350497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.877 [2024-06-09 09:14:10.350515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.877 [2024-06-09 09:14:10.350523] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.877 [2024-06-09 09:14:10.350529] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.877 [2024-06-09 09:14:10.350545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.877 qpair failed and we were unable to recover it.
00:35:47.877 [2024-06-09 09:14:10.360330] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.877 [2024-06-09 09:14:10.360439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.877 [2024-06-09 09:14:10.360457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.877 [2024-06-09 09:14:10.360464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.877 [2024-06-09 09:14:10.360470] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.878 [2024-06-09 09:14:10.360487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.878 qpair failed and we were unable to recover it.
00:35:47.878 [2024-06-09 09:14:10.370393] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.878 [2024-06-09 09:14:10.370522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.878 [2024-06-09 09:14:10.370540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.878 [2024-06-09 09:14:10.370548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.878 [2024-06-09 09:14:10.370555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.878 [2024-06-09 09:14:10.370573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.878 qpair failed and we were unable to recover it.
00:35:47.878 [2024-06-09 09:14:10.380419] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.878 [2024-06-09 09:14:10.380521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.878 [2024-06-09 09:14:10.380539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.878 [2024-06-09 09:14:10.380550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.878 [2024-06-09 09:14:10.380556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.878 [2024-06-09 09:14:10.380573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.878 qpair failed and we were unable to recover it.
00:35:47.878 [2024-06-09 09:14:10.390509] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.878 [2024-06-09 09:14:10.390637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.878 [2024-06-09 09:14:10.390654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.878 [2024-06-09 09:14:10.390662] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.878 [2024-06-09 09:14:10.390668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.878 [2024-06-09 09:14:10.390684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.878 qpair failed and we were unable to recover it.
00:35:47.878 [2024-06-09 09:14:10.400514] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.878 [2024-06-09 09:14:10.400642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.878 [2024-06-09 09:14:10.400660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.878 [2024-06-09 09:14:10.400667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.878 [2024-06-09 09:14:10.400673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.878 [2024-06-09 09:14:10.400688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.878 qpair failed and we were unable to recover it.
00:35:47.878 [2024-06-09 09:14:10.410493] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.878 [2024-06-09 09:14:10.410589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.878 [2024-06-09 09:14:10.410606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.878 [2024-06-09 09:14:10.410614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.878 [2024-06-09 09:14:10.410620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.878 [2024-06-09 09:14:10.410636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.878 qpair failed and we were unable to recover it.
00:35:47.878 [2024-06-09 09:14:10.420540] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.878 [2024-06-09 09:14:10.420638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.878 [2024-06-09 09:14:10.420655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.878 [2024-06-09 09:14:10.420662] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.878 [2024-06-09 09:14:10.420668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.878 [2024-06-09 09:14:10.420684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.878 qpair failed and we were unable to recover it.
00:35:47.878 [2024-06-09 09:14:10.430523] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.878 [2024-06-09 09:14:10.430621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.878 [2024-06-09 09:14:10.430638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.878 [2024-06-09 09:14:10.430646] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.878 [2024-06-09 09:14:10.430652] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:47.878 [2024-06-09 09:14:10.430668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:47.878 qpair failed and we were unable to recover it.
00:35:48.141 [2024-06-09 09:14:10.440570] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.141 [2024-06-09 09:14:10.440675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.141 [2024-06-09 09:14:10.440693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.141 [2024-06-09 09:14:10.440700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.141 [2024-06-09 09:14:10.440706] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.141 [2024-06-09 09:14:10.440723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.141 qpair failed and we were unable to recover it.
00:35:48.141 [2024-06-09 09:14:10.450592] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.141 [2024-06-09 09:14:10.450692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.141 [2024-06-09 09:14:10.450710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.141 [2024-06-09 09:14:10.450718] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.141 [2024-06-09 09:14:10.450724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.141 [2024-06-09 09:14:10.450740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.141 qpair failed and we were unable to recover it.
00:35:48.141 [2024-06-09 09:14:10.460658] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.141 [2024-06-09 09:14:10.460758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.141 [2024-06-09 09:14:10.460775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.141 [2024-06-09 09:14:10.460783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.141 [2024-06-09 09:14:10.460789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.141 [2024-06-09 09:14:10.460805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.141 qpair failed and we were unable to recover it.
00:35:48.141 [2024-06-09 09:14:10.470753] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.141 [2024-06-09 09:14:10.470850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.141 [2024-06-09 09:14:10.470870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.141 [2024-06-09 09:14:10.470878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.141 [2024-06-09 09:14:10.470884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.141 [2024-06-09 09:14:10.470900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.141 qpair failed and we were unable to recover it.
00:35:48.141 [2024-06-09 09:14:10.480696] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.141 [2024-06-09 09:14:10.480798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.141 [2024-06-09 09:14:10.480815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.141 [2024-06-09 09:14:10.480823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.141 [2024-06-09 09:14:10.480829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.141 [2024-06-09 09:14:10.480844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.141 qpair failed and we were unable to recover it. 
00:35:48.141 [2024-06-09 09:14:10.490691] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.141 [2024-06-09 09:14:10.490792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.142 [2024-06-09 09:14:10.490818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.142 [2024-06-09 09:14:10.490827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.142 [2024-06-09 09:14:10.490834] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.142 [2024-06-09 09:14:10.490854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.142 qpair failed and we were unable to recover it. 
00:35:48.142 [2024-06-09 09:14:10.500755] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.142 [2024-06-09 09:14:10.500863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.142 [2024-06-09 09:14:10.500882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.142 [2024-06-09 09:14:10.500890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.142 [2024-06-09 09:14:10.500897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.142 [2024-06-09 09:14:10.500915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.142 qpair failed and we were unable to recover it. 
00:35:48.142 [2024-06-09 09:14:10.510767] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.142 [2024-06-09 09:14:10.510872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.142 [2024-06-09 09:14:10.510890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.142 [2024-06-09 09:14:10.510897] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.142 [2024-06-09 09:14:10.510904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.142 [2024-06-09 09:14:10.510925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.142 qpair failed and we were unable to recover it. 
00:35:48.142 [2024-06-09 09:14:10.520789] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.142 [2024-06-09 09:14:10.520892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.142 [2024-06-09 09:14:10.520910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.142 [2024-06-09 09:14:10.520917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.142 [2024-06-09 09:14:10.520923] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.142 [2024-06-09 09:14:10.520940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.142 qpair failed and we were unable to recover it. 
00:35:48.142 [2024-06-09 09:14:10.530830] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.142 [2024-06-09 09:14:10.530928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.142 [2024-06-09 09:14:10.530945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.142 [2024-06-09 09:14:10.530953] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.142 [2024-06-09 09:14:10.530959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.142 [2024-06-09 09:14:10.530974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.142 qpair failed and we were unable to recover it. 
00:35:48.142 [2024-06-09 09:14:10.540927] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.142 [2024-06-09 09:14:10.541029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.142 [2024-06-09 09:14:10.541048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.142 [2024-06-09 09:14:10.541055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.142 [2024-06-09 09:14:10.541061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.142 [2024-06-09 09:14:10.541078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.142 qpair failed and we were unable to recover it. 
00:35:48.142 [2024-06-09 09:14:10.550762] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.142 [2024-06-09 09:14:10.550862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.142 [2024-06-09 09:14:10.550879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.142 [2024-06-09 09:14:10.550887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.142 [2024-06-09 09:14:10.550893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.142 [2024-06-09 09:14:10.550909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.142 qpair failed and we were unable to recover it. 
00:35:48.142 [2024-06-09 09:14:10.560807] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.142 [2024-06-09 09:14:10.560919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.142 [2024-06-09 09:14:10.560940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.142 [2024-06-09 09:14:10.560947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.142 [2024-06-09 09:14:10.560953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.142 [2024-06-09 09:14:10.560969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.142 qpair failed and we were unable to recover it. 
00:35:48.142 [2024-06-09 09:14:10.570970] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.142 [2024-06-09 09:14:10.571073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.142 [2024-06-09 09:14:10.571090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.142 [2024-06-09 09:14:10.571098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.142 [2024-06-09 09:14:10.571104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.142 [2024-06-09 09:14:10.571119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.142 qpair failed and we were unable to recover it. 
00:35:48.142 [2024-06-09 09:14:10.581139] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.142 [2024-06-09 09:14:10.581248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.142 [2024-06-09 09:14:10.581273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.142 [2024-06-09 09:14:10.581283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.142 [2024-06-09 09:14:10.581289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.142 [2024-06-09 09:14:10.581310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.142 qpair failed and we were unable to recover it. 
00:35:48.142 [2024-06-09 09:14:10.590995] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.142 [2024-06-09 09:14:10.591100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.142 [2024-06-09 09:14:10.591125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.142 [2024-06-09 09:14:10.591134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.142 [2024-06-09 09:14:10.591141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.142 [2024-06-09 09:14:10.591162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.142 qpair failed and we were unable to recover it. 
00:35:48.142 [2024-06-09 09:14:10.601013] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.142 [2024-06-09 09:14:10.601137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.142 [2024-06-09 09:14:10.601157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.142 [2024-06-09 09:14:10.601166] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.142 [2024-06-09 09:14:10.601177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.142 [2024-06-09 09:14:10.601195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.142 qpair failed and we were unable to recover it. 
00:35:48.142 [2024-06-09 09:14:10.611090] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.142 [2024-06-09 09:14:10.611238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.142 [2024-06-09 09:14:10.611256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.142 [2024-06-09 09:14:10.611264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.142 [2024-06-09 09:14:10.611270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.142 [2024-06-09 09:14:10.611288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.142 qpair failed and we were unable to recover it. 
00:35:48.142 [2024-06-09 09:14:10.621069] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.143 [2024-06-09 09:14:10.621174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.143 [2024-06-09 09:14:10.621191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.143 [2024-06-09 09:14:10.621199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.143 [2024-06-09 09:14:10.621205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.143 [2024-06-09 09:14:10.621221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.143 qpair failed and we were unable to recover it. 
00:35:48.143 [2024-06-09 09:14:10.631101] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.143 [2024-06-09 09:14:10.631200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.143 [2024-06-09 09:14:10.631217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.143 [2024-06-09 09:14:10.631224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.143 [2024-06-09 09:14:10.631230] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.143 [2024-06-09 09:14:10.631246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.143 qpair failed and we were unable to recover it. 
00:35:48.143 [2024-06-09 09:14:10.641096] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.143 [2024-06-09 09:14:10.641193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.143 [2024-06-09 09:14:10.641211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.143 [2024-06-09 09:14:10.641218] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.143 [2024-06-09 09:14:10.641224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.143 [2024-06-09 09:14:10.641240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.143 qpair failed and we were unable to recover it. 
00:35:48.143 [2024-06-09 09:14:10.651122] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.143 [2024-06-09 09:14:10.651222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.143 [2024-06-09 09:14:10.651239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.143 [2024-06-09 09:14:10.651246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.143 [2024-06-09 09:14:10.651253] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.143 [2024-06-09 09:14:10.651268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.143 qpair failed and we were unable to recover it. 
00:35:48.143 [2024-06-09 09:14:10.661213] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.143 [2024-06-09 09:14:10.661313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.143 [2024-06-09 09:14:10.661330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.143 [2024-06-09 09:14:10.661338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.143 [2024-06-09 09:14:10.661344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.143 [2024-06-09 09:14:10.661360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.143 qpair failed and we were unable to recover it. 
00:35:48.143 [2024-06-09 09:14:10.671178] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.143 [2024-06-09 09:14:10.671275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.143 [2024-06-09 09:14:10.671293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.143 [2024-06-09 09:14:10.671301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.143 [2024-06-09 09:14:10.671306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.143 [2024-06-09 09:14:10.671322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.143 qpair failed and we were unable to recover it. 
00:35:48.143 [2024-06-09 09:14:10.681222] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.143 [2024-06-09 09:14:10.681329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.143 [2024-06-09 09:14:10.681346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.143 [2024-06-09 09:14:10.681353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.143 [2024-06-09 09:14:10.681359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.143 [2024-06-09 09:14:10.681375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.143 qpair failed and we were unable to recover it. 
00:35:48.143 [2024-06-09 09:14:10.691254] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.143 [2024-06-09 09:14:10.691349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.143 [2024-06-09 09:14:10.691366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.143 [2024-06-09 09:14:10.691374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.143 [2024-06-09 09:14:10.691383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.143 [2024-06-09 09:14:10.691399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.143 qpair failed and we were unable to recover it. 
00:35:48.406 [2024-06-09 09:14:10.701291] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.406 [2024-06-09 09:14:10.701391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.406 [2024-06-09 09:14:10.701414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.406 [2024-06-09 09:14:10.701422] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.406 [2024-06-09 09:14:10.701428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.406 [2024-06-09 09:14:10.701444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.406 qpair failed and we were unable to recover it. 
00:35:48.406 [2024-06-09 09:14:10.711276] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.406 [2024-06-09 09:14:10.711375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.406 [2024-06-09 09:14:10.711392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.406 [2024-06-09 09:14:10.711400] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.406 [2024-06-09 09:14:10.711412] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.406 [2024-06-09 09:14:10.711428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.406 qpair failed and we were unable to recover it. 
00:35:48.406 [2024-06-09 09:14:10.721302] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.406 [2024-06-09 09:14:10.721413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.406 [2024-06-09 09:14:10.721430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.406 [2024-06-09 09:14:10.721438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.406 [2024-06-09 09:14:10.721444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.406 [2024-06-09 09:14:10.721460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.406 qpair failed and we were unable to recover it. 
00:35:48.406 [2024-06-09 09:14:10.731331] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.406 [2024-06-09 09:14:10.731431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.407 [2024-06-09 09:14:10.731449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.407 [2024-06-09 09:14:10.731456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.407 [2024-06-09 09:14:10.731462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.407 [2024-06-09 09:14:10.731478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.407 qpair failed and we were unable to recover it. 
00:35:48.407 [2024-06-09 09:14:10.741426] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.407 [2024-06-09 09:14:10.741523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.407 [2024-06-09 09:14:10.741541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.407 [2024-06-09 09:14:10.741548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.407 [2024-06-09 09:14:10.741554] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.407 [2024-06-09 09:14:10.741570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.407 qpair failed and we were unable to recover it. 
00:35:48.407 [2024-06-09 09:14:10.751404] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.407 [2024-06-09 09:14:10.751508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.407 [2024-06-09 09:14:10.751525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.407 [2024-06-09 09:14:10.751532] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.407 [2024-06-09 09:14:10.751538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.407 [2024-06-09 09:14:10.751554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.407 qpair failed and we were unable to recover it. 
00:35:48.407 [2024-06-09 09:14:10.761455] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.407 [2024-06-09 09:14:10.761674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.407 [2024-06-09 09:14:10.761691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.407 [2024-06-09 09:14:10.761698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.407 [2024-06-09 09:14:10.761704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.407 [2024-06-09 09:14:10.761719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.407 qpair failed and we were unable to recover it.
00:35:48.407 [2024-06-09 09:14:10.771427] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.407 [2024-06-09 09:14:10.771510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.407 [2024-06-09 09:14:10.771524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.407 [2024-06-09 09:14:10.771531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.407 [2024-06-09 09:14:10.771537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.407 [2024-06-09 09:14:10.771551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.407 qpair failed and we were unable to recover it.
00:35:48.407 [2024-06-09 09:14:10.781516] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.407 [2024-06-09 09:14:10.781653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.407 [2024-06-09 09:14:10.781671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.407 [2024-06-09 09:14:10.781682] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.407 [2024-06-09 09:14:10.781688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.407 [2024-06-09 09:14:10.781704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.407 qpair failed and we were unable to recover it.
00:35:48.407 [2024-06-09 09:14:10.791414] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.407 [2024-06-09 09:14:10.791517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.407 [2024-06-09 09:14:10.791534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.407 [2024-06-09 09:14:10.791542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.407 [2024-06-09 09:14:10.791548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.407 [2024-06-09 09:14:10.791566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.407 qpair failed and we were unable to recover it.
00:35:48.407 [2024-06-09 09:14:10.801461] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.407 [2024-06-09 09:14:10.801567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.407 [2024-06-09 09:14:10.801585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.407 [2024-06-09 09:14:10.801592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.407 [2024-06-09 09:14:10.801599] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.407 [2024-06-09 09:14:10.801614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.407 qpair failed and we were unable to recover it.
00:35:48.407 [2024-06-09 09:14:10.811591] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.407 [2024-06-09 09:14:10.811726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.407 [2024-06-09 09:14:10.811743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.407 [2024-06-09 09:14:10.811751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.407 [2024-06-09 09:14:10.811757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.407 [2024-06-09 09:14:10.811772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.407 qpair failed and we were unable to recover it.
00:35:48.407 [2024-06-09 09:14:10.821642] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.407 [2024-06-09 09:14:10.821741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.407 [2024-06-09 09:14:10.821758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.407 [2024-06-09 09:14:10.821766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.407 [2024-06-09 09:14:10.821772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.407 [2024-06-09 09:14:10.821788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.407 qpair failed and we were unable to recover it.
00:35:48.407 [2024-06-09 09:14:10.831636] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.407 [2024-06-09 09:14:10.831736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.407 [2024-06-09 09:14:10.831754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.407 [2024-06-09 09:14:10.831762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.407 [2024-06-09 09:14:10.831768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.407 [2024-06-09 09:14:10.831784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.407 qpair failed and we were unable to recover it.
00:35:48.407 [2024-06-09 09:14:10.841650] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.407 [2024-06-09 09:14:10.841750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.407 [2024-06-09 09:14:10.841767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.407 [2024-06-09 09:14:10.841774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.407 [2024-06-09 09:14:10.841781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.407 [2024-06-09 09:14:10.841796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.407 qpair failed and we were unable to recover it.
00:35:48.407 [2024-06-09 09:14:10.851652] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.407 [2024-06-09 09:14:10.851749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.407 [2024-06-09 09:14:10.851766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.407 [2024-06-09 09:14:10.851774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.407 [2024-06-09 09:14:10.851780] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.407 [2024-06-09 09:14:10.851796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.407 qpair failed and we were unable to recover it.
00:35:48.407 [2024-06-09 09:14:10.861755] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.407 [2024-06-09 09:14:10.861850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.407 [2024-06-09 09:14:10.861867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.407 [2024-06-09 09:14:10.861875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.407 [2024-06-09 09:14:10.861881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.408 [2024-06-09 09:14:10.861896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.408 qpair failed and we were unable to recover it.
00:35:48.408 [2024-06-09 09:14:10.871779] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.408 [2024-06-09 09:14:10.871873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.408 [2024-06-09 09:14:10.871894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.408 [2024-06-09 09:14:10.871901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.408 [2024-06-09 09:14:10.871907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.408 [2024-06-09 09:14:10.871923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.408 qpair failed and we were unable to recover it.
00:35:48.408 [2024-06-09 09:14:10.881748] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.408 [2024-06-09 09:14:10.881848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.408 [2024-06-09 09:14:10.881865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.408 [2024-06-09 09:14:10.881873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.408 [2024-06-09 09:14:10.881879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.408 [2024-06-09 09:14:10.881894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.408 qpair failed and we were unable to recover it.
00:35:48.408 [2024-06-09 09:14:10.891792] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.408 [2024-06-09 09:14:10.891939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.408 [2024-06-09 09:14:10.891956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.408 [2024-06-09 09:14:10.891964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.408 [2024-06-09 09:14:10.891970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.408 [2024-06-09 09:14:10.891985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.408 qpair failed and we were unable to recover it.
00:35:48.408 [2024-06-09 09:14:10.901879] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.408 [2024-06-09 09:14:10.901980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.408 [2024-06-09 09:14:10.901998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.408 [2024-06-09 09:14:10.902006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.408 [2024-06-09 09:14:10.902012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.408 [2024-06-09 09:14:10.902027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.408 qpair failed and we were unable to recover it.
00:35:48.408 [2024-06-09 09:14:10.911867] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.408 [2024-06-09 09:14:10.911966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.408 [2024-06-09 09:14:10.911983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.408 [2024-06-09 09:14:10.911991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.408 [2024-06-09 09:14:10.911997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.408 [2024-06-09 09:14:10.912016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.408 qpair failed and we were unable to recover it.
00:35:48.408 [2024-06-09 09:14:10.921882] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.408 [2024-06-09 09:14:10.922010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.408 [2024-06-09 09:14:10.922027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.408 [2024-06-09 09:14:10.922035] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.408 [2024-06-09 09:14:10.922041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.408 [2024-06-09 09:14:10.922056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.408 qpair failed and we were unable to recover it.
00:35:48.408 [2024-06-09 09:14:10.931892] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.408 [2024-06-09 09:14:10.932001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.408 [2024-06-09 09:14:10.932027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.408 [2024-06-09 09:14:10.932036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.408 [2024-06-09 09:14:10.932043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.408 [2024-06-09 09:14:10.932063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.408 qpair failed and we were unable to recover it.
00:35:48.408 [2024-06-09 09:14:10.941997] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.408 [2024-06-09 09:14:10.942107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.408 [2024-06-09 09:14:10.942133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.408 [2024-06-09 09:14:10.942142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.408 [2024-06-09 09:14:10.942149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.408 [2024-06-09 09:14:10.942171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.408 qpair failed and we were unable to recover it.
00:35:48.408 [2024-06-09 09:14:10.951960] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.408 [2024-06-09 09:14:10.952065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.408 [2024-06-09 09:14:10.952090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.408 [2024-06-09 09:14:10.952099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.408 [2024-06-09 09:14:10.952106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.408 [2024-06-09 09:14:10.952126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.408 qpair failed and we were unable to recover it.
00:35:48.408 [2024-06-09 09:14:10.961989] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.408 [2024-06-09 09:14:10.962096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.408 [2024-06-09 09:14:10.962127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.408 [2024-06-09 09:14:10.962136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.408 [2024-06-09 09:14:10.962142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.408 [2024-06-09 09:14:10.962163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.408 qpair failed and we were unable to recover it.
00:35:48.671 [2024-06-09 09:14:10.972017] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.671 [2024-06-09 09:14:10.972122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.671 [2024-06-09 09:14:10.972148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.671 [2024-06-09 09:14:10.972157] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.671 [2024-06-09 09:14:10.972164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.671 [2024-06-09 09:14:10.972184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.671 qpair failed and we were unable to recover it.
00:35:48.671 [2024-06-09 09:14:10.982092] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.671 [2024-06-09 09:14:10.982205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.671 [2024-06-09 09:14:10.982230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.671 [2024-06-09 09:14:10.982240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.671 [2024-06-09 09:14:10.982247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.671 [2024-06-09 09:14:10.982267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.671 qpair failed and we were unable to recover it.
00:35:48.672 [2024-06-09 09:14:10.992136] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.672 [2024-06-09 09:14:10.992279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.672 [2024-06-09 09:14:10.992299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.672 [2024-06-09 09:14:10.992308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.672 [2024-06-09 09:14:10.992315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.672 [2024-06-09 09:14:10.992332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.672 qpair failed and we were unable to recover it.
00:35:48.672 [2024-06-09 09:14:11.002089] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.672 [2024-06-09 09:14:11.002200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.672 [2024-06-09 09:14:11.002217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.672 [2024-06-09 09:14:11.002225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.672 [2024-06-09 09:14:11.002231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.672 [2024-06-09 09:14:11.002253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.672 qpair failed and we were unable to recover it.
00:35:48.672 [2024-06-09 09:14:11.012133] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.672 [2024-06-09 09:14:11.012360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.672 [2024-06-09 09:14:11.012377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.672 [2024-06-09 09:14:11.012385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.672 [2024-06-09 09:14:11.012391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.672 [2024-06-09 09:14:11.012412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.672 qpair failed and we were unable to recover it.
00:35:48.672 [2024-06-09 09:14:11.022187] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.672 [2024-06-09 09:14:11.022292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.672 [2024-06-09 09:14:11.022309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.672 [2024-06-09 09:14:11.022317] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.672 [2024-06-09 09:14:11.022323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.672 [2024-06-09 09:14:11.022339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.672 qpair failed and we were unable to recover it.
00:35:48.672 [2024-06-09 09:14:11.032218] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.672 [2024-06-09 09:14:11.032360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.672 [2024-06-09 09:14:11.032377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.672 [2024-06-09 09:14:11.032385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.672 [2024-06-09 09:14:11.032391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.672 [2024-06-09 09:14:11.032413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.672 qpair failed and we were unable to recover it.
00:35:48.672 [2024-06-09 09:14:11.042104] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.672 [2024-06-09 09:14:11.042221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.672 [2024-06-09 09:14:11.042238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.672 [2024-06-09 09:14:11.042246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.672 [2024-06-09 09:14:11.042252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.672 [2024-06-09 09:14:11.042268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.672 qpair failed and we were unable to recover it.
00:35:48.672 [2024-06-09 09:14:11.052238] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.672 [2024-06-09 09:14:11.052337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.672 [2024-06-09 09:14:11.052354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.672 [2024-06-09 09:14:11.052362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.672 [2024-06-09 09:14:11.052368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.672 [2024-06-09 09:14:11.052383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.672 qpair failed and we were unable to recover it.
00:35:48.672 [2024-06-09 09:14:11.062297] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.672 [2024-06-09 09:14:11.062392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.672 [2024-06-09 09:14:11.062416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.672 [2024-06-09 09:14:11.062424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.672 [2024-06-09 09:14:11.062430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.672 [2024-06-09 09:14:11.062446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.672 qpair failed and we were unable to recover it.
00:35:48.672 [2024-06-09 09:14:11.072261] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.672 [2024-06-09 09:14:11.072357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.672 [2024-06-09 09:14:11.072374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.672 [2024-06-09 09:14:11.072381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.672 [2024-06-09 09:14:11.072388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.672 [2024-06-09 09:14:11.072408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.672 qpair failed and we were unable to recover it.
00:35:48.672 [2024-06-09 09:14:11.082193] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.672 [2024-06-09 09:14:11.082293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.672 [2024-06-09 09:14:11.082310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.672 [2024-06-09 09:14:11.082317] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.672 [2024-06-09 09:14:11.082323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.672 [2024-06-09 09:14:11.082339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.672 qpair failed and we were unable to recover it.
00:35:48.672 [2024-06-09 09:14:11.092215] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.672 [2024-06-09 09:14:11.092317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.672 [2024-06-09 09:14:11.092335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.672 [2024-06-09 09:14:11.092342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.672 [2024-06-09 09:14:11.092356] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.672 [2024-06-09 09:14:11.092372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.672 qpair failed and we were unable to recover it.
00:35:48.672 [2024-06-09 09:14:11.102431] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.672 [2024-06-09 09:14:11.102535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.672 [2024-06-09 09:14:11.102552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.672 [2024-06-09 09:14:11.102560] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.672 [2024-06-09 09:14:11.102566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.672 [2024-06-09 09:14:11.102582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.672 qpair failed and we were unable to recover it.
00:35:48.672 [2024-06-09 09:14:11.112382] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.672 [2024-06-09 09:14:11.112482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.672 [2024-06-09 09:14:11.112499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.672 [2024-06-09 09:14:11.112507] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.672 [2024-06-09 09:14:11.112512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90
00:35:48.672 [2024-06-09 09:14:11.112529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:48.672 qpair failed and we were unable to recover it.
00:35:48.672 [2024-06-09 09:14:11.122419] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.672 [2024-06-09 09:14:11.122541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.673 [2024-06-09 09:14:11.122559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.673 [2024-06-09 09:14:11.122566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.673 [2024-06-09 09:14:11.122572] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.673 [2024-06-09 09:14:11.122588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.673 qpair failed and we were unable to recover it. 
00:35:48.673 [2024-06-09 09:14:11.132450] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.673 [2024-06-09 09:14:11.132543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.673 [2024-06-09 09:14:11.132560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.673 [2024-06-09 09:14:11.132567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.673 [2024-06-09 09:14:11.132573] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.673 [2024-06-09 09:14:11.132590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.673 qpair failed and we were unable to recover it. 
00:35:48.673 [2024-06-09 09:14:11.142499] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.673 [2024-06-09 09:14:11.142607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.673 [2024-06-09 09:14:11.142624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.673 [2024-06-09 09:14:11.142632] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.673 [2024-06-09 09:14:11.142638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.673 [2024-06-09 09:14:11.142654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.673 qpair failed and we were unable to recover it. 
00:35:48.673 [2024-06-09 09:14:11.152500] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.673 [2024-06-09 09:14:11.152597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.673 [2024-06-09 09:14:11.152614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.673 [2024-06-09 09:14:11.152622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.673 [2024-06-09 09:14:11.152628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.673 [2024-06-09 09:14:11.152643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.673 qpair failed and we were unable to recover it. 
00:35:48.673 [2024-06-09 09:14:11.162531] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.673 [2024-06-09 09:14:11.162640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.673 [2024-06-09 09:14:11.162657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.673 [2024-06-09 09:14:11.162665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.673 [2024-06-09 09:14:11.162671] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.673 [2024-06-09 09:14:11.162686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.673 qpair failed and we were unable to recover it. 
00:35:48.673 [2024-06-09 09:14:11.172551] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.673 [2024-06-09 09:14:11.172649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.673 [2024-06-09 09:14:11.172666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.673 [2024-06-09 09:14:11.172674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.673 [2024-06-09 09:14:11.172680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.673 [2024-06-09 09:14:11.172695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.673 qpair failed and we were unable to recover it. 
00:35:48.673 [2024-06-09 09:14:11.182616] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.673 [2024-06-09 09:14:11.182727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.673 [2024-06-09 09:14:11.182743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.673 [2024-06-09 09:14:11.182755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.673 [2024-06-09 09:14:11.182761] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.673 [2024-06-09 09:14:11.182777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.673 qpair failed and we were unable to recover it. 
00:35:48.673 [2024-06-09 09:14:11.192624] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.673 [2024-06-09 09:14:11.192729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.673 [2024-06-09 09:14:11.192746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.673 [2024-06-09 09:14:11.192754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.673 [2024-06-09 09:14:11.192760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.673 [2024-06-09 09:14:11.192776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.673 qpair failed and we were unable to recover it. 
00:35:48.673 [2024-06-09 09:14:11.202620] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.673 [2024-06-09 09:14:11.202719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.673 [2024-06-09 09:14:11.202736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.673 [2024-06-09 09:14:11.202743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.673 [2024-06-09 09:14:11.202749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.673 [2024-06-09 09:14:11.202764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.673 qpair failed and we were unable to recover it. 
00:35:48.673 [2024-06-09 09:14:11.212652] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.673 [2024-06-09 09:14:11.212757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.673 [2024-06-09 09:14:11.212775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.673 [2024-06-09 09:14:11.212782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.673 [2024-06-09 09:14:11.212788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.673 [2024-06-09 09:14:11.212805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.673 qpair failed and we were unable to recover it. 
00:35:48.673 [2024-06-09 09:14:11.222726] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.673 [2024-06-09 09:14:11.222831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.673 [2024-06-09 09:14:11.222849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.673 [2024-06-09 09:14:11.222856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.673 [2024-06-09 09:14:11.222862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.673 [2024-06-09 09:14:11.222878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.673 qpair failed and we were unable to recover it. 
00:35:48.936 [2024-06-09 09:14:11.232706] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.936 [2024-06-09 09:14:11.232815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.936 [2024-06-09 09:14:11.232832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.936 [2024-06-09 09:14:11.232839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.936 [2024-06-09 09:14:11.232845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.936 [2024-06-09 09:14:11.232860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.936 qpair failed and we were unable to recover it. 
00:35:48.936 [2024-06-09 09:14:11.242723] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.936 [2024-06-09 09:14:11.242826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.936 [2024-06-09 09:14:11.242843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.936 [2024-06-09 09:14:11.242850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.936 [2024-06-09 09:14:11.242856] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.936 [2024-06-09 09:14:11.242872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.936 qpair failed and we were unable to recover it. 
00:35:48.936 [2024-06-09 09:14:11.252773] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.936 [2024-06-09 09:14:11.252873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.936 [2024-06-09 09:14:11.252891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.936 [2024-06-09 09:14:11.252898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.936 [2024-06-09 09:14:11.252904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.936 [2024-06-09 09:14:11.252919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.936 qpair failed and we were unable to recover it. 
00:35:48.936 [2024-06-09 09:14:11.262846] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.936 [2024-06-09 09:14:11.262975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.936 [2024-06-09 09:14:11.262992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.936 [2024-06-09 09:14:11.262999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.936 [2024-06-09 09:14:11.263005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.936 [2024-06-09 09:14:11.263021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.936 qpair failed and we were unable to recover it. 
00:35:48.936 [2024-06-09 09:14:11.272833] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.936 [2024-06-09 09:14:11.272939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.936 [2024-06-09 09:14:11.272964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.936 [2024-06-09 09:14:11.272979] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.936 [2024-06-09 09:14:11.272986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.936 [2024-06-09 09:14:11.273006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.936 qpair failed and we were unable to recover it. 
00:35:48.936 [2024-06-09 09:14:11.282979] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.936 [2024-06-09 09:14:11.283097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.936 [2024-06-09 09:14:11.283122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.936 [2024-06-09 09:14:11.283131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.936 [2024-06-09 09:14:11.283138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.936 [2024-06-09 09:14:11.283159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.936 qpair failed and we were unable to recover it. 
00:35:48.936 [2024-06-09 09:14:11.292862] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.936 [2024-06-09 09:14:11.292961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.936 [2024-06-09 09:14:11.292987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.936 [2024-06-09 09:14:11.292996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.936 [2024-06-09 09:14:11.293003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.936 [2024-06-09 09:14:11.293023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.936 qpair failed and we were unable to recover it. 
00:35:48.936 [2024-06-09 09:14:11.302922] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.936 [2024-06-09 09:14:11.303034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.936 [2024-06-09 09:14:11.303060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.936 [2024-06-09 09:14:11.303069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.936 [2024-06-09 09:14:11.303076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.936 [2024-06-09 09:14:11.303097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.936 qpair failed and we were unable to recover it. 
00:35:48.936 [2024-06-09 09:14:11.312932] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.936 [2024-06-09 09:14:11.313035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.936 [2024-06-09 09:14:11.313060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.936 [2024-06-09 09:14:11.313069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.936 [2024-06-09 09:14:11.313076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.936 [2024-06-09 09:14:11.313097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.936 qpair failed and we were unable to recover it. 
00:35:48.936 [2024-06-09 09:14:11.322939] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.936 [2024-06-09 09:14:11.323043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.936 [2024-06-09 09:14:11.323062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.936 [2024-06-09 09:14:11.323069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.936 [2024-06-09 09:14:11.323076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.936 [2024-06-09 09:14:11.323093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.936 qpair failed and we were unable to recover it. 
00:35:48.936 [2024-06-09 09:14:11.332886] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.936 [2024-06-09 09:14:11.332987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.936 [2024-06-09 09:14:11.333013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.936 [2024-06-09 09:14:11.333022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.936 [2024-06-09 09:14:11.333030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.936 [2024-06-09 09:14:11.333050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.936 qpair failed and we were unable to recover it. 
00:35:48.936 [2024-06-09 09:14:11.343044] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.937 [2024-06-09 09:14:11.343141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.937 [2024-06-09 09:14:11.343159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.937 [2024-06-09 09:14:11.343167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.937 [2024-06-09 09:14:11.343174] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.937 [2024-06-09 09:14:11.343191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.937 qpair failed and we were unable to recover it. 
00:35:48.937 [2024-06-09 09:14:11.352961] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.937 [2024-06-09 09:14:11.353068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.937 [2024-06-09 09:14:11.353087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.937 [2024-06-09 09:14:11.353095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.937 [2024-06-09 09:14:11.353101] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.937 [2024-06-09 09:14:11.353119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.937 qpair failed and we were unable to recover it. 
00:35:48.937 [2024-06-09 09:14:11.362972] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.937 [2024-06-09 09:14:11.363073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.937 [2024-06-09 09:14:11.363097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.937 [2024-06-09 09:14:11.363104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.937 [2024-06-09 09:14:11.363110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.937 [2024-06-09 09:14:11.363127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.937 qpair failed and we were unable to recover it. 
00:35:48.937 [2024-06-09 09:14:11.373095] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.937 [2024-06-09 09:14:11.373326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.937 [2024-06-09 09:14:11.373359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.937 [2024-06-09 09:14:11.373369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.937 [2024-06-09 09:14:11.373376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.937 [2024-06-09 09:14:11.373396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.937 qpair failed and we were unable to recover it. 
00:35:48.937 [2024-06-09 09:14:11.383027] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.937 [2024-06-09 09:14:11.383127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.937 [2024-06-09 09:14:11.383145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.937 [2024-06-09 09:14:11.383153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.937 [2024-06-09 09:14:11.383159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.937 [2024-06-09 09:14:11.383176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.937 qpair failed and we were unable to recover it. 
00:35:48.937 [2024-06-09 09:14:11.393142] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.937 [2024-06-09 09:14:11.393240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.937 [2024-06-09 09:14:11.393258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.937 [2024-06-09 09:14:11.393265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.937 [2024-06-09 09:14:11.393272] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.937 [2024-06-09 09:14:11.393288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.937 qpair failed and we were unable to recover it. 
00:35:48.937 [2024-06-09 09:14:11.403045] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.937 [2024-06-09 09:14:11.403149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.937 [2024-06-09 09:14:11.403167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.937 [2024-06-09 09:14:11.403174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.937 [2024-06-09 09:14:11.403180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:48.937 [2024-06-09 09:14:11.403201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.937 qpair failed and we were unable to recover it. 
00:35:48.937-00:35:49.201 [2024-06-09 09:14:11.413 through 09:14:11.754] the identical CONNECT failure sequence (Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; CQ transport error -6 on qpair id 4; "qpair failed and we were unable to recover it.") repeats at roughly 10 ms intervals: 35 further occurrences elided.
00:35:49.463 [2024-06-09 09:14:11.764118] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.463 [2024-06-09 09:14:11.764229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.463 [2024-06-09 09:14:11.764259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.463 [2024-06-09 09:14:11.764269] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.463 [2024-06-09 09:14:11.764275] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.463 [2024-06-09 09:14:11.764296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.463 qpair failed and we were unable to recover it. 
00:35:49.463 [2024-06-09 09:14:11.774173] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.463 [2024-06-09 09:14:11.774286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.463 [2024-06-09 09:14:11.774304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.463 [2024-06-09 09:14:11.774312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.463 [2024-06-09 09:14:11.774318] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.463 [2024-06-09 09:14:11.774335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.463 qpair failed and we were unable to recover it. 
00:35:49.463 [2024-06-09 09:14:11.784222] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.463 [2024-06-09 09:14:11.784317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.463 [2024-06-09 09:14:11.784335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.463 [2024-06-09 09:14:11.784343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.463 [2024-06-09 09:14:11.784349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.463 [2024-06-09 09:14:11.784366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.463 qpair failed and we were unable to recover it. 
00:35:49.463 [2024-06-09 09:14:11.794233] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.463 [2024-06-09 09:14:11.794327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.463 [2024-06-09 09:14:11.794345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.463 [2024-06-09 09:14:11.794352] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.463 [2024-06-09 09:14:11.794358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.463 [2024-06-09 09:14:11.794374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.463 qpair failed and we were unable to recover it. 
00:35:49.463 [2024-06-09 09:14:11.804236] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.463 [2024-06-09 09:14:11.804338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.463 [2024-06-09 09:14:11.804355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.463 [2024-06-09 09:14:11.804362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.463 [2024-06-09 09:14:11.804368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.463 [2024-06-09 09:14:11.804392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.463 qpair failed and we were unable to recover it. 
00:35:49.463 [2024-06-09 09:14:11.814284] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.463 [2024-06-09 09:14:11.814511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.463 [2024-06-09 09:14:11.814529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.463 [2024-06-09 09:14:11.814536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.463 [2024-06-09 09:14:11.814542] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.463 [2024-06-09 09:14:11.814557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.463 qpair failed and we were unable to recover it. 
00:35:49.463 [2024-06-09 09:14:11.824354] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.463 [2024-06-09 09:14:11.824468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.463 [2024-06-09 09:14:11.824485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.463 [2024-06-09 09:14:11.824493] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.463 [2024-06-09 09:14:11.824499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.463 [2024-06-09 09:14:11.824515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.463 qpair failed and we were unable to recover it. 
00:35:49.463 [2024-06-09 09:14:11.834326] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.463 [2024-06-09 09:14:11.834426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.463 [2024-06-09 09:14:11.834444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.463 [2024-06-09 09:14:11.834451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.463 [2024-06-09 09:14:11.834457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.463 [2024-06-09 09:14:11.834473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.463 qpair failed and we were unable to recover it. 
00:35:49.464 [2024-06-09 09:14:11.844396] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.464 [2024-06-09 09:14:11.844499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.464 [2024-06-09 09:14:11.844516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.464 [2024-06-09 09:14:11.844523] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.464 [2024-06-09 09:14:11.844529] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.464 [2024-06-09 09:14:11.844545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.464 qpair failed and we were unable to recover it. 
00:35:49.464 [2024-06-09 09:14:11.854374] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.464 [2024-06-09 09:14:11.854471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.464 [2024-06-09 09:14:11.854493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.464 [2024-06-09 09:14:11.854500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.464 [2024-06-09 09:14:11.854506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.464 [2024-06-09 09:14:11.854522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.464 qpair failed and we were unable to recover it. 
00:35:49.464 [2024-06-09 09:14:11.864521] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.464 [2024-06-09 09:14:11.864636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.464 [2024-06-09 09:14:11.864654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.464 [2024-06-09 09:14:11.864661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.464 [2024-06-09 09:14:11.864667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.464 [2024-06-09 09:14:11.864683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.464 qpair failed and we were unable to recover it. 
00:35:49.464 [2024-06-09 09:14:11.874450] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.464 [2024-06-09 09:14:11.874545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.464 [2024-06-09 09:14:11.874562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.464 [2024-06-09 09:14:11.874570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.464 [2024-06-09 09:14:11.874575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.464 [2024-06-09 09:14:11.874591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.464 qpair failed and we were unable to recover it. 
00:35:49.464 [2024-06-09 09:14:11.884470] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.464 [2024-06-09 09:14:11.884603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.464 [2024-06-09 09:14:11.884620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.464 [2024-06-09 09:14:11.884627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.464 [2024-06-09 09:14:11.884633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.464 [2024-06-09 09:14:11.884649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.464 qpair failed and we were unable to recover it. 
00:35:49.464 [2024-06-09 09:14:11.894501] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.464 [2024-06-09 09:14:11.894596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.464 [2024-06-09 09:14:11.894613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.464 [2024-06-09 09:14:11.894620] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.464 [2024-06-09 09:14:11.894629] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.464 [2024-06-09 09:14:11.894646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.464 qpair failed and we were unable to recover it. 
00:35:49.464 [2024-06-09 09:14:11.904518] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.464 [2024-06-09 09:14:11.904619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.464 [2024-06-09 09:14:11.904636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.464 [2024-06-09 09:14:11.904644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.464 [2024-06-09 09:14:11.904650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.464 [2024-06-09 09:14:11.904665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.464 qpair failed and we were unable to recover it. 
00:35:49.464 [2024-06-09 09:14:11.914540] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.464 [2024-06-09 09:14:11.914635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.464 [2024-06-09 09:14:11.914653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.464 [2024-06-09 09:14:11.914660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.464 [2024-06-09 09:14:11.914666] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.464 [2024-06-09 09:14:11.914681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.464 qpair failed and we were unable to recover it. 
00:35:49.464 [2024-06-09 09:14:11.924583] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.464 [2024-06-09 09:14:11.924690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.464 [2024-06-09 09:14:11.924707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.464 [2024-06-09 09:14:11.924715] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.464 [2024-06-09 09:14:11.924721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.464 [2024-06-09 09:14:11.924737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.464 qpair failed and we were unable to recover it. 
00:35:49.464 [2024-06-09 09:14:11.934619] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.464 [2024-06-09 09:14:11.934697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.464 [2024-06-09 09:14:11.934711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.464 [2024-06-09 09:14:11.934718] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.464 [2024-06-09 09:14:11.934724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.464 [2024-06-09 09:14:11.934738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.464 qpair failed and we were unable to recover it. 
00:35:49.464 [2024-06-09 09:14:11.944676] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.464 [2024-06-09 09:14:11.944824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.464 [2024-06-09 09:14:11.944840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.464 [2024-06-09 09:14:11.944848] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.464 [2024-06-09 09:14:11.944854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.464 [2024-06-09 09:14:11.944869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.464 qpair failed and we were unable to recover it. 
00:35:49.464 [2024-06-09 09:14:11.954681] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.464 [2024-06-09 09:14:11.954790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.464 [2024-06-09 09:14:11.954807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.464 [2024-06-09 09:14:11.954814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.464 [2024-06-09 09:14:11.954820] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.464 [2024-06-09 09:14:11.954835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.464 qpair failed and we were unable to recover it. 
00:35:49.464 [2024-06-09 09:14:11.964629] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.464 [2024-06-09 09:14:11.964858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.464 [2024-06-09 09:14:11.964875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.464 [2024-06-09 09:14:11.964882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.464 [2024-06-09 09:14:11.964888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.464 [2024-06-09 09:14:11.964903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.465 qpair failed and we were unable to recover it. 
00:35:49.465 [2024-06-09 09:14:11.974727] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.465 [2024-06-09 09:14:11.974953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.465 [2024-06-09 09:14:11.974970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.465 [2024-06-09 09:14:11.974977] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.465 [2024-06-09 09:14:11.974983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.465 [2024-06-09 09:14:11.974997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.465 qpair failed and we were unable to recover it. 
00:35:49.465 [2024-06-09 09:14:11.984814] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.465 [2024-06-09 09:14:11.984926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.465 [2024-06-09 09:14:11.984943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.465 [2024-06-09 09:14:11.984951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.465 [2024-06-09 09:14:11.984960] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.465 [2024-06-09 09:14:11.984976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.465 qpair failed and we were unable to recover it. 
00:35:49.465 [2024-06-09 09:14:11.994687] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.465 [2024-06-09 09:14:11.994792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.465 [2024-06-09 09:14:11.994809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.465 [2024-06-09 09:14:11.994817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.465 [2024-06-09 09:14:11.994823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.465 [2024-06-09 09:14:11.994839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.465 qpair failed and we were unable to recover it. 
00:35:49.465 [2024-06-09 09:14:12.004801] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.465 [2024-06-09 09:14:12.004901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.465 [2024-06-09 09:14:12.004918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.465 [2024-06-09 09:14:12.004925] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.465 [2024-06-09 09:14:12.004931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.465 [2024-06-09 09:14:12.004947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.465 qpair failed and we were unable to recover it. 
00:35:49.465 [2024-06-09 09:14:12.014823] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.465 [2024-06-09 09:14:12.014916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.465 [2024-06-09 09:14:12.014933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.465 [2024-06-09 09:14:12.014940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.465 [2024-06-09 09:14:12.014947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.465 [2024-06-09 09:14:12.014962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.465 qpair failed and we were unable to recover it. 
00:35:49.728 [2024-06-09 09:14:12.024815] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.728 [2024-06-09 09:14:12.024917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.728 [2024-06-09 09:14:12.024935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.728 [2024-06-09 09:14:12.024942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.728 [2024-06-09 09:14:12.024948] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.728 [2024-06-09 09:14:12.024965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.728 qpair failed and we were unable to recover it. 
00:35:49.728 [2024-06-09 09:14:12.034866] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.728 [2024-06-09 09:14:12.034969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.728 [2024-06-09 09:14:12.034995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.728 [2024-06-09 09:14:12.035004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.728 [2024-06-09 09:14:12.035011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.728 [2024-06-09 09:14:12.035032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.728 qpair failed and we were unable to recover it. 
00:35:49.728 [2024-06-09 09:14:12.044893] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.728 [2024-06-09 09:14:12.044999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.728 [2024-06-09 09:14:12.045017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.728 [2024-06-09 09:14:12.045025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.728 [2024-06-09 09:14:12.045031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.728 [2024-06-09 09:14:12.045048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.728 qpair failed and we were unable to recover it. 
00:35:49.728 [2024-06-09 09:14:12.054913] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.728 [2024-06-09 09:14:12.055013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.728 [2024-06-09 09:14:12.055038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.728 [2024-06-09 09:14:12.055047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.728 [2024-06-09 09:14:12.055054] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.728 [2024-06-09 09:14:12.055076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.728 qpair failed and we were unable to recover it. 
00:35:49.728 [2024-06-09 09:14:12.064988] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.728 [2024-06-09 09:14:12.065098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.728 [2024-06-09 09:14:12.065123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.728 [2024-06-09 09:14:12.065132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.728 [2024-06-09 09:14:12.065139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.728 [2024-06-09 09:14:12.065160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.728 qpair failed and we were unable to recover it. 
00:35:49.728 [2024-06-09 09:14:12.074900] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.728 [2024-06-09 09:14:12.075132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.728 [2024-06-09 09:14:12.075157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.728 [2024-06-09 09:14:12.075171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.728 [2024-06-09 09:14:12.075178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.728 [2024-06-09 09:14:12.075198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.728 qpair failed and we were unable to recover it. 
00:35:49.728 [2024-06-09 09:14:12.085049] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.728 [2024-06-09 09:14:12.085146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.728 [2024-06-09 09:14:12.085171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.728 [2024-06-09 09:14:12.085179] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.728 [2024-06-09 09:14:12.085186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.728 [2024-06-09 09:14:12.085206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.728 qpair failed and we were unable to recover it. 
00:35:49.728 [2024-06-09 09:14:12.095037] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.728 [2024-06-09 09:14:12.095167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.728 [2024-06-09 09:14:12.095193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.728 [2024-06-09 09:14:12.095202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.728 [2024-06-09 09:14:12.095208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.728 [2024-06-09 09:14:12.095229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.728 qpair failed and we were unable to recover it. 
00:35:49.728 [2024-06-09 09:14:12.105133] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.728 [2024-06-09 09:14:12.105255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.728 [2024-06-09 09:14:12.105275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.728 [2024-06-09 09:14:12.105284] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.728 [2024-06-09 09:14:12.105292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.728 [2024-06-09 09:14:12.105312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.728 qpair failed and we were unable to recover it. 
00:35:49.728 [2024-06-09 09:14:12.115111] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.728 [2024-06-09 09:14:12.115247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.728 [2024-06-09 09:14:12.115265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.728 [2024-06-09 09:14:12.115273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.728 [2024-06-09 09:14:12.115279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.728 [2024-06-09 09:14:12.115296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.728 qpair failed and we were unable to recover it. 
00:35:49.728 [2024-06-09 09:14:12.125184] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.729 [2024-06-09 09:14:12.125287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.729 [2024-06-09 09:14:12.125304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.729 [2024-06-09 09:14:12.125312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.729 [2024-06-09 09:14:12.125319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.729 [2024-06-09 09:14:12.125335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.729 qpair failed and we were unable to recover it. 
00:35:49.729 [2024-06-09 09:14:12.135129] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.729 [2024-06-09 09:14:12.135227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.729 [2024-06-09 09:14:12.135244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.729 [2024-06-09 09:14:12.135252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.729 [2024-06-09 09:14:12.135258] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.729 [2024-06-09 09:14:12.135274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.729 qpair failed and we were unable to recover it. 
00:35:49.729 [2024-06-09 09:14:12.145204] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.729 [2024-06-09 09:14:12.145304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.729 [2024-06-09 09:14:12.145321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.729 [2024-06-09 09:14:12.145329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.729 [2024-06-09 09:14:12.145336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.729 [2024-06-09 09:14:12.145352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.729 qpair failed and we were unable to recover it. 
00:35:49.729 [2024-06-09 09:14:12.155191] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.729 [2024-06-09 09:14:12.155298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.729 [2024-06-09 09:14:12.155314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.729 [2024-06-09 09:14:12.155322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.729 [2024-06-09 09:14:12.155328] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.729 [2024-06-09 09:14:12.155344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.729 qpair failed and we were unable to recover it. 
00:35:49.729 [2024-06-09 09:14:12.165245] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.729 [2024-06-09 09:14:12.165374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.729 [2024-06-09 09:14:12.165395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.729 [2024-06-09 09:14:12.165408] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.729 [2024-06-09 09:14:12.165414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.729 [2024-06-09 09:14:12.165430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.729 qpair failed and we were unable to recover it. 
00:35:49.729 [2024-06-09 09:14:12.175256] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.729 [2024-06-09 09:14:12.175350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.729 [2024-06-09 09:14:12.175367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.729 [2024-06-09 09:14:12.175375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.729 [2024-06-09 09:14:12.175381] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.729 [2024-06-09 09:14:12.175397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.729 qpair failed and we were unable to recover it. 
00:35:49.729 [2024-06-09 09:14:12.185327] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.729 [2024-06-09 09:14:12.185434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.729 [2024-06-09 09:14:12.185452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.729 [2024-06-09 09:14:12.185460] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.729 [2024-06-09 09:14:12.185466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.729 [2024-06-09 09:14:12.185482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.729 qpair failed and we were unable to recover it. 
00:35:49.729 [2024-06-09 09:14:12.195316] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.729 [2024-06-09 09:14:12.195421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.729 [2024-06-09 09:14:12.195439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.729 [2024-06-09 09:14:12.195447] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.729 [2024-06-09 09:14:12.195453] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.729 [2024-06-09 09:14:12.195469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.729 qpair failed and we were unable to recover it. 
00:35:49.729 [2024-06-09 09:14:12.205315] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.729 [2024-06-09 09:14:12.205424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.729 [2024-06-09 09:14:12.205442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.729 [2024-06-09 09:14:12.205449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.729 [2024-06-09 09:14:12.205456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.729 [2024-06-09 09:14:12.205475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.729 qpair failed and we were unable to recover it. 
00:35:49.729 [2024-06-09 09:14:12.215357] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.729 [2024-06-09 09:14:12.215494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.729 [2024-06-09 09:14:12.215512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.729 [2024-06-09 09:14:12.215520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.729 [2024-06-09 09:14:12.215527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.729 [2024-06-09 09:14:12.215542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.729 qpair failed and we were unable to recover it. 
00:35:49.729 [2024-06-09 09:14:12.225410] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.729 [2024-06-09 09:14:12.225513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.729 [2024-06-09 09:14:12.225531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.729 [2024-06-09 09:14:12.225538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.729 [2024-06-09 09:14:12.225544] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.729 [2024-06-09 09:14:12.225560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.729 qpair failed and we were unable to recover it. 
00:35:49.729 [2024-06-09 09:14:12.235379] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.729 [2024-06-09 09:14:12.235479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.729 [2024-06-09 09:14:12.235496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.729 [2024-06-09 09:14:12.235504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.729 [2024-06-09 09:14:12.235510] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.729 [2024-06-09 09:14:12.235525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.729 qpair failed and we were unable to recover it. 
00:35:49.729 [2024-06-09 09:14:12.245432] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.729 [2024-06-09 09:14:12.245535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.729 [2024-06-09 09:14:12.245553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.729 [2024-06-09 09:14:12.245560] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.729 [2024-06-09 09:14:12.245566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.729 [2024-06-09 09:14:12.245581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.729 qpair failed and we were unable to recover it. 
00:35:49.729 [2024-06-09 09:14:12.255465] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.729 [2024-06-09 09:14:12.255571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.729 [2024-06-09 09:14:12.255592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.729 [2024-06-09 09:14:12.255600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.730 [2024-06-09 09:14:12.255606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.730 [2024-06-09 09:14:12.255621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.730 qpair failed and we were unable to recover it. 
00:35:49.730 [2024-06-09 09:14:12.265676] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.730 [2024-06-09 09:14:12.265776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.730 [2024-06-09 09:14:12.265794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.730 [2024-06-09 09:14:12.265801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.730 [2024-06-09 09:14:12.265807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.730 [2024-06-09 09:14:12.265823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.730 qpair failed and we were unable to recover it. 
00:35:49.730 [2024-06-09 09:14:12.275517] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.730 [2024-06-09 09:14:12.275659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.730 [2024-06-09 09:14:12.275676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.730 [2024-06-09 09:14:12.275683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.730 [2024-06-09 09:14:12.275689] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.730 [2024-06-09 09:14:12.275705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.730 qpair failed and we were unable to recover it. 
00:35:49.992 [2024-06-09 09:14:12.285552] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.992 [2024-06-09 09:14:12.285659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.992 [2024-06-09 09:14:12.285677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.992 [2024-06-09 09:14:12.285684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.992 [2024-06-09 09:14:12.285690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.992 [2024-06-09 09:14:12.285706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.992 qpair failed and we were unable to recover it. 
00:35:49.992 [2024-06-09 09:14:12.295463] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.992 [2024-06-09 09:14:12.295568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.992 [2024-06-09 09:14:12.295585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.992 [2024-06-09 09:14:12.295592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.992 [2024-06-09 09:14:12.295602] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.992 [2024-06-09 09:14:12.295617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.992 qpair failed and we were unable to recover it. 
00:35:49.992 [2024-06-09 09:14:12.305662] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.992 [2024-06-09 09:14:12.305766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.992 [2024-06-09 09:14:12.305783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.992 [2024-06-09 09:14:12.305791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.992 [2024-06-09 09:14:12.305797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.992 [2024-06-09 09:14:12.305812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.992 qpair failed and we were unable to recover it. 
00:35:49.992 [2024-06-09 09:14:12.315655] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.992 [2024-06-09 09:14:12.315752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.992 [2024-06-09 09:14:12.315770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.992 [2024-06-09 09:14:12.315777] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.992 [2024-06-09 09:14:12.315783] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.992 [2024-06-09 09:14:12.315798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.992 qpair failed and we were unable to recover it. 
00:35:49.992 [2024-06-09 09:14:12.325634] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.992 [2024-06-09 09:14:12.325731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.992 [2024-06-09 09:14:12.325748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.992 [2024-06-09 09:14:12.325755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.992 [2024-06-09 09:14:12.325761] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df0000b90 00:35:49.992 [2024-06-09 09:14:12.325777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.992 qpair failed and we were unable to recover it. 
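Every retry block above carries the same five-step signature: the target rejects the I/O queue pair (`ctrlr.c: 757: Unknown controller ID 0x1`), the host's fabrics CONNECT poll fails (`rc -5`, then `sct 1, sc 130`), the TCP qpair teardown follows, and the transport reports `CQ transport error -6`. A throwaway Python sketch, not part of the autotest suite and assuming only the message formats visible in this log, that tallies those signatures from a saved console log:

```python
import re
from collections import Counter

# Match the three recurring *ERROR* messages from the retry blocks above.
FAIL_RE = re.compile(
    r"\*ERROR\*: (Unknown controller ID \S+"
    r"|Connect command failed, rc -?\d+"
    r"|Connect command completed with error: sct \d+, sc \d+)"
)

def tally_connect_failures(log_text: str) -> Counter:
    """Count each distinct connect-failure message in the log text."""
    return Counter(m.group(1) for m in FAIL_RE.finditer(log_text))

# Two representative lines lifted from the log above.
sample = (
    "[2024-06-09 09:14:12.044893] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: "
    "*ERROR*: Unknown controller ID 0x1\n"
    "[2024-06-09 09:14:12.045017] nvme_fabric.c: 611:"
    "_nvme_fabric_qpair_connect_poll: *ERROR*: "
    "Connect command completed with error: sct 1, sc 130\n"
)
counts = tally_connect_failures(sample)
```

Run over the whole log, the resulting counter shows how many retry cycles hit each stage of the failure.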
00:35:49.992 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Write completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Write completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Write completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Write completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Write completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Write completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Write completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Write completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Write completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Write completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Write completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Read completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Write completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 Write completed with error (sct=0, sc=8)
00:35:49.993 starting I/O failed
00:35:49.993 [2024-06-09 09:14:12.326102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.993 [2024-06-09 09:14:12.335666] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.993 [2024-06-09 09:14:12.335747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.993 [2024-06-09 09:14:12.335764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.993 [2024-06-09 09:14:12.335770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.993 [2024-06-09 09:14:12.335774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df8000b90
00:35:49.993 [2024-06-09 09:14:12.335788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.993 qpair failed and we were unable to recover it.
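The completion burst above is the host failing back a full queue of in-flight commands during the disconnect; each aborted request is printed as `Read`/`Write completed with error (sct=0, sc=8)` followed by `starting I/O failed`. A hypothetical post-processing helper, not part of the test itself and assuming only that print format, that splits the failed completions by direction:

```python
import re
from collections import Counter

# Match the per-command abort lines printed during qpair teardown above.
COMPLETION_RE = re.compile(
    r"(Read|Write) completed with error \(sct=(\d+), sc=(\d+)\)"
)

def count_failed_io(log_text: str) -> Counter:
    """Tally failed completions by direction (Read vs. Write)."""
    return Counter(m.group(1) for m in COMPLETION_RE.finditer(log_text))

# Synthetic fragment in the same shape as the dump above: 3 reads, 2 writes.
sample = (
    "Read completed with error (sct=0, sc=8)\nstarting I/O failed\n" * 3
    + "Write completed with error (sct=0, sc=8)\nstarting I/O failed\n" * 2
)
io_counts = count_failed_io(sample)
```

Run over the full log, this yields the read/write split of the aborted queue depth for each teardown.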
00:35:49.993 [2024-06-09 09:14:12.345817] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.993 [2024-06-09 09:14:12.345941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.993 [2024-06-09 09:14:12.345954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.993 [2024-06-09 09:14:12.345960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.993 [2024-06-09 09:14:12.345965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6df8000b90
00:35:49.993 [2024-06-09 09:14:12.345978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.993 qpair failed and we were unable to recover it.
00:35:49.993 [2024-06-09 09:14:12.346190] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:35:49.993 A controller has encountered a failure and is being reset.
00:35:49.993 Controller properly reset.
00:35:49.993 Initializing NVMe Controllers
00:35:49.993 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:35:49.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:35:49.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:35:49.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:35:49.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:35:49.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:35:49.993 Initialization complete. Launching workers.
00:35:49.993 Starting thread on core 1 00:35:49.993 Starting thread on core 2 00:35:49.993 Starting thread on core 3 00:35:49.993 Starting thread on core 0 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:35:49.993 00:35:49.993 real 0m11.307s 00:35:49.993 user 0m20.474s 00:35:49.993 sys 0m4.539s 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:49.993 ************************************ 00:35:49.993 END TEST nvmf_target_disconnect_tc2 00:35:49.993 ************************************ 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:49.993 rmmod nvme_tcp 00:35:49.993 rmmod nvme_fabrics 00:35:49.993 rmmod nvme_keyring 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:35:49.993 09:14:12 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2847704 ']' 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2847704 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 2847704 ']' 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 2847704 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:49.993 09:14:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2847704 00:35:50.255 09:14:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_4 00:35:50.255 09:14:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']' 00:35:50.255 09:14:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2847704' 00:35:50.255 killing process with pid 2847704 00:35:50.255 09:14:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 2847704 00:35:50.255 09:14:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 2847704 00:35:50.255 09:14:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:50.255 09:14:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:50.255 09:14:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:50.255 09:14:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:50.255 09:14:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:50.255 09:14:12 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:50.255 09:14:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:50.255 09:14:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.829 09:14:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:52.829 00:35:52.829 real 0m20.898s 00:35:52.829 user 0m47.845s 00:35:52.829 sys 0m10.028s 00:35:52.829 09:14:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:52.829 09:14:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:52.829 ************************************ 00:35:52.829 END TEST nvmf_target_disconnect 00:35:52.829 ************************************ 00:35:52.829 09:14:14 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:35:52.829 09:14:14 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:52.829 09:14:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.829 09:14:14 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:35:52.829 00:35:52.829 real 29m16.058s 00:35:52.829 user 74m17.254s 00:35:52.829 sys 7m43.271s 00:35:52.829 09:14:14 nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:52.829 09:14:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.829 ************************************ 00:35:52.829 END TEST nvmf_tcp 00:35:52.829 ************************************ 00:35:52.829 09:14:14 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:35:52.829 09:14:14 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:52.829 09:14:14 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:35:52.829 09:14:14 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:52.829 09:14:14 -- 
common/autotest_common.sh@10 -- # set +x 00:35:52.829 ************************************ 00:35:52.829 START TEST spdkcli_nvmf_tcp 00:35:52.829 ************************************ 00:35:52.829 09:14:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:52.829 * Looking for test storage... 00:35:52.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:52.829 09:14:15 
spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2849526 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2849526 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # '[' -z 2849526 ']' 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:52.829 09:14:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:52.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:52.830 09:14:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:52.830 09:14:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.830 [2024-06-09 09:14:15.122422] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:35:52.830 [2024-06-09 09:14:15.122471] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2849526 ] 00:35:52.830 EAL: No free 2048 kB hugepages reported on node 1 00:35:52.830 [2024-06-09 09:14:15.180150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:52.830 [2024-06-09 09:14:15.245336] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:52.830 [2024-06-09 09:14:15.245339] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:53.417 09:14:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:53.417 09:14:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@863 -- # return 0 00:35:53.417 09:14:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:53.417 09:14:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:53.417 09:14:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:53.417 09:14:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:53.417 09:14:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:53.417 09:14:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:53.417 09:14:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:53.417 09:14:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:53.417 09:14:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 
''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:53.417 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:53.417 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:53.417 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:53.417 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:53.417 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:53.417 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:53.417 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:53.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:53.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:53.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:53.417 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:53.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:53.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:53.417 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:53.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:53.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:53.417 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:53.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:53.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:53.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:53.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:53.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:53.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:53.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:53.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:53.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:53.417 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:53.417 ' 00:35:55.962 [2024-06-09 09:14:18.254117] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:56.902 [2024-06-09 09:14:19.417851] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:59.445 [2024-06-09 09:14:21.556060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:01.356 [2024-06-09 09:14:23.389410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:02.299 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:02.299 
Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:02.299 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:02.299 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:02.299 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:02.299 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:02.299 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:02.299 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:02.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:02.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:02.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:02.299 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:02.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:02.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:02.299 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:02.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:02.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', 
'127.0.0.1:4260', True] 00:36:02.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:02.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:02.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:02.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:02.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:02.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:02.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:02.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:02.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:02.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:02.299 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:02.560 09:14:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:02.560 09:14:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:02.560 09:14:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:02.560 09:14:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:02.560 09:14:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # 
xtrace_disable 00:36:02.560 09:14:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:02.560 09:14:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:02.560 09:14:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:02.820 09:14:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:02.820 09:14:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:02.820 09:14:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:02.820 09:14:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:02.820 09:14:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:02.820 09:14:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:02.820 09:14:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:36:02.820 09:14:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:03.080 09:14:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:03.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:03.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:03.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:03.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' 
'\''127.0.0.1:4262'\'' 00:36:03.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:03.080 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:03.080 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:03.080 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:03.080 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:03.080 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:03.080 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:03.080 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:03.080 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:03.080 ' 00:36:08.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:08.370 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:08.370 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:08.370 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:08.370 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:08.370 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:08.370 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:08.370 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:08.370 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:08.370 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:08.370 Executing command: ['/bdevs/malloc 
delete Malloc4', 'Malloc4', False] 00:36:08.370 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:08.370 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:08.370 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2849526 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 2849526 ']' 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 2849526 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # uname 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2849526 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2849526' 00:36:08.370 killing process with pid 2849526 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # kill 2849526 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # wait 2849526 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2849526 ']' 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 
2849526 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 2849526 ']' 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 2849526 00:36:08.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2849526) - No such process 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # echo 'Process with pid 2849526 is not found' 00:36:08.370 Process with pid 2849526 is not found 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:08.370 00:36:08.370 real 0m15.535s 00:36:08.370 user 0m31.969s 00:36:08.370 sys 0m0.718s 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:36:08.370 09:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:08.370 ************************************ 00:36:08.370 END TEST spdkcli_nvmf_tcp 00:36:08.370 ************************************ 00:36:08.370 09:14:30 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:08.370 09:14:30 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:36:08.370 09:14:30 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:08.370 09:14:30 -- common/autotest_common.sh@10 -- # set +x 00:36:08.370 ************************************ 00:36:08.370 START TEST nvmf_identify_passthru 00:36:08.370 ************************************ 00:36:08.370 09:14:30 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:08.370 * Looking for test storage... 00:36:08.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:08.370 09:14:30 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:08.370 09:14:30 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:08.370 09:14:30 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:08.370 09:14:30 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:08.370 09:14:30 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.370 09:14:30 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.370 09:14:30 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.370 09:14:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:08.370 09:14:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:08.370 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:08.370 09:14:30 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:08.370 09:14:30 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:08.370 09:14:30 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:08.370 09:14:30 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:08.370 09:14:30 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.370 09:14:30 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.370 09:14:30 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.370 09:14:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:36:08.371 09:14:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.371 09:14:30 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:08.371 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:08.371 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:08.371 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:08.371 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:08.371 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:08.371 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:08.371 09:14:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:08.371 09:14:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:08.371 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:08.371 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:08.371 09:14:30 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:36:08.371 09:14:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:14.964 09:14:37 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:14.964 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:14.964 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:14.964 09:14:37 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:14.964 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.964 09:14:37 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:14.964 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:14.965 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:14.965 09:14:37 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:14.965 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:15.226 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:15.226 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:15.226 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:15.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:15.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:36:15.226 00:36:15.226 --- 10.0.0.2 ping statistics --- 00:36:15.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.226 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:36:15.226 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:15.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:15.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:36:15.226 00:36:15.226 --- 10.0.0.1 ping statistics --- 00:36:15.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.226 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:36:15.226 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:15.226 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:36:15.226 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:15.226 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:15.226 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:15.226 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:15.226 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:15.226 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:15.226 09:14:37 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:15.226 09:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:15.226 09:14:37 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:36:15.226 09:14:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:15.226 09:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:15.226 09:14:37 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=() 00:36:15.226 09:14:37 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # local bdfs 00:36:15.226 09:14:37 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:36:15.226 09:14:37 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:36:15.226 09:14:37 nvmf_identify_passthru -- 
common/autotest_common.sh@1512 -- # bdfs=() 00:36:15.226 09:14:37 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # local bdfs 00:36:15.226 09:14:37 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:15.226 09:14:37 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:15.226 09:14:37 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:36:15.226 09:14:37 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:36:15.226 09:14:37 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:36:15.226 09:14:37 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # echo 0000:65:00.0 00:36:15.487 09:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:15.487 09:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:15.487 09:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:15.487 09:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:15.487 09:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:15.487 EAL: No free 2048 kB hugepages reported on node 1 00:36:15.748 09:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:36:15.748 09:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:15.748 09:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:36:15.748 09:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:15.748 EAL: No free 2048 kB hugepages reported on node 1 00:36:16.319 09:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:16.319 09:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:16.319 09:14:38 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:16.319 09:14:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:16.319 09:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:16.319 09:14:38 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:36:16.319 09:14:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:16.319 09:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2856283 00:36:16.319 09:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:16.319 09:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:16.319 09:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2856283 00:36:16.319 09:14:38 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # '[' -z 2856283 ']' 00:36:16.319 09:14:38 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:16.319 09:14:38 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:16.319 09:14:38 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:16.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:16.319 09:14:38 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:16.319 09:14:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:16.319 [2024-06-09 09:14:38.810277] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:36:16.319 [2024-06-09 09:14:38.810327] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:16.319 EAL: No free 2048 kB hugepages reported on node 1 00:36:16.319 [2024-06-09 09:14:38.874185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:16.581 [2024-06-09 09:14:38.940085] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:16.581 [2024-06-09 09:14:38.940120] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:16.581 [2024-06-09 09:14:38.940128] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:16.581 [2024-06-09 09:14:38.940134] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:16.581 [2024-06-09 09:14:38.940139] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:16.581 [2024-06-09 09:14:38.940272] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.581 [2024-06-09 09:14:38.940397] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:36:16.581 [2024-06-09 09:14:38.940553] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:36:16.581 [2024-06-09 09:14:38.940647] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:36:17.153 09:14:39 nvmf_identify_passthru -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:17.154 09:14:39 nvmf_identify_passthru -- common/autotest_common.sh@863 -- # return 0 00:36:17.154 09:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:17.154 09:14:39 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.154 09:14:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:17.154 INFO: Log level set to 20 00:36:17.154 INFO: Requests: 00:36:17.154 { 00:36:17.154 "jsonrpc": "2.0", 00:36:17.154 "method": "nvmf_set_config", 00:36:17.154 "id": 1, 00:36:17.154 "params": { 00:36:17.154 "admin_cmd_passthru": { 00:36:17.154 "identify_ctrlr": true 00:36:17.154 } 00:36:17.154 } 00:36:17.154 } 00:36:17.154 00:36:17.154 INFO: response: 00:36:17.154 { 00:36:17.154 "jsonrpc": "2.0", 00:36:17.154 "id": 1, 00:36:17.154 "result": true 00:36:17.154 } 00:36:17.154 00:36:17.154 09:14:39 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.154 09:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:17.154 09:14:39 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.154 09:14:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:17.154 INFO: Setting log level to 20 00:36:17.154 INFO: Setting log level to 20 00:36:17.154 INFO: Log level set to 20 00:36:17.154 INFO: Log level set to 20 00:36:17.154 
INFO: Requests: 00:36:17.154 { 00:36:17.154 "jsonrpc": "2.0", 00:36:17.154 "method": "framework_start_init", 00:36:17.154 "id": 1 00:36:17.154 } 00:36:17.154 00:36:17.154 INFO: Requests: 00:36:17.154 { 00:36:17.154 "jsonrpc": "2.0", 00:36:17.154 "method": "framework_start_init", 00:36:17.154 "id": 1 00:36:17.154 } 00:36:17.154 00:36:17.154 [2024-06-09 09:14:39.664831] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:17.154 INFO: response: 00:36:17.154 { 00:36:17.154 "jsonrpc": "2.0", 00:36:17.154 "id": 1, 00:36:17.154 "result": true 00:36:17.154 } 00:36:17.154 00:36:17.154 INFO: response: 00:36:17.154 { 00:36:17.154 "jsonrpc": "2.0", 00:36:17.154 "id": 1, 00:36:17.154 "result": true 00:36:17.154 } 00:36:17.154 00:36:17.154 09:14:39 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.154 09:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:17.154 09:14:39 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.154 09:14:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:17.154 INFO: Setting log level to 40 00:36:17.154 INFO: Setting log level to 40 00:36:17.154 INFO: Setting log level to 40 00:36:17.154 [2024-06-09 09:14:39.678090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:17.154 09:14:39 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.154 09:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:17.154 09:14:39 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:17.154 09:14:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:17.415 09:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:17.415 09:14:39 
nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.415 09:14:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:17.676 Nvme0n1 00:36:17.676 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.676 09:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:17.676 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.676 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:17.676 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.676 09:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:17.676 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.676 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:17.676 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.676 09:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:17.676 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.676 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:17.676 [2024-06-09 09:14:40.061826] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:17.676 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.676 09:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:17.676 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.676 09:14:40 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:17.676 [ 00:36:17.676 { 00:36:17.676 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:17.676 "subtype": "Discovery", 00:36:17.676 "listen_addresses": [], 00:36:17.676 "allow_any_host": true, 00:36:17.676 "hosts": [] 00:36:17.676 }, 00:36:17.676 { 00:36:17.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:17.676 "subtype": "NVMe", 00:36:17.676 "listen_addresses": [ 00:36:17.676 { 00:36:17.676 "trtype": "TCP", 00:36:17.677 "adrfam": "IPv4", 00:36:17.677 "traddr": "10.0.0.2", 00:36:17.677 "trsvcid": "4420" 00:36:17.677 } 00:36:17.677 ], 00:36:17.677 "allow_any_host": true, 00:36:17.677 "hosts": [], 00:36:17.677 "serial_number": "SPDK00000000000001", 00:36:17.677 "model_number": "SPDK bdev Controller", 00:36:17.677 "max_namespaces": 1, 00:36:17.677 "min_cntlid": 1, 00:36:17.677 "max_cntlid": 65519, 00:36:17.677 "namespaces": [ 00:36:17.677 { 00:36:17.677 "nsid": 1, 00:36:17.677 "bdev_name": "Nvme0n1", 00:36:17.677 "name": "Nvme0n1", 00:36:17.677 "nguid": "3634473052605487002538450000003E", 00:36:17.677 "uuid": "36344730-5260-5487-0025-38450000003e" 00:36:17.677 } 00:36:17.677 ] 00:36:17.677 } 00:36:17.677 ] 00:36:17.677 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.677 09:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:17.677 09:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:17.677 09:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:17.677 EAL: No free 2048 kB hugepages reported on node 1 00:36:17.937 09:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:36:17.937 09:14:40 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:17.937 09:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:17.937 09:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:17.937 EAL: No free 2048 kB hugepages reported on node 1 00:36:17.937 09:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:36:17.937 09:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:36:17.937 09:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:36:17.937 09:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:17.937 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.937 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:18.197 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.197 09:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:18.197 09:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:18.197 09:14:40 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:18.197 09:14:40 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:36:18.197 09:14:40 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:18.197 09:14:40 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:36:18.197 09:14:40 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:18.197 09:14:40 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:18.197 rmmod 
nvme_tcp 00:36:18.197 rmmod nvme_fabrics 00:36:18.197 rmmod nvme_keyring 00:36:18.197 09:14:40 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:18.197 09:14:40 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:36:18.197 09:14:40 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:36:18.197 09:14:40 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2856283 ']' 00:36:18.197 09:14:40 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2856283 00:36:18.197 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@949 -- # '[' -z 2856283 ']' 00:36:18.197 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # kill -0 2856283 00:36:18.197 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # uname 00:36:18.197 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:36:18.197 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2856283 00:36:18.197 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:36:18.197 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:36:18.197 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2856283' 00:36:18.197 killing process with pid 2856283 00:36:18.197 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # kill 2856283 00:36:18.197 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # wait 2856283 00:36:18.457 09:14:40 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:18.457 09:14:40 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:18.457 09:14:40 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:18.457 09:14:40 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:36:18.457 09:14:40 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:18.457 09:14:40 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:18.457 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:18.457 09:14:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:21.012 09:14:42 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:21.012 00:36:21.012 real 0m12.416s 00:36:21.012 user 0m10.106s 00:36:21.012 sys 0m5.873s 00:36:21.012 09:14:42 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # xtrace_disable 00:36:21.012 09:14:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:21.012 ************************************ 00:36:21.012 END TEST nvmf_identify_passthru 00:36:21.012 ************************************ 00:36:21.012 09:14:43 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:21.012 09:14:43 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:36:21.012 09:14:43 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:21.012 09:14:43 -- common/autotest_common.sh@10 -- # set +x 00:36:21.012 ************************************ 00:36:21.012 START TEST nvmf_dif 00:36:21.012 ************************************ 00:36:21.012 09:14:43 nvmf_dif -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:21.012 * Looking for test storage... 
00:36:21.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:21.012 09:14:43 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:21.012 09:14:43 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:21.012 09:14:43 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:21.012 09:14:43 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:21.012 09:14:43 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:21.012 09:14:43 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:21.012 09:14:43 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:21.012 09:14:43 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:21.012 09:14:43 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:21.012 09:14:43 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:21.012 09:14:43 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:21.012 09:14:43 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:21.012 09:14:43 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:21.012 09:14:43 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:21.012 09:14:43 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:21.012 09:14:43 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:21.012 09:14:43 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:21.012 09:14:43 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:21.012 09:14:43 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:21.012 09:14:43 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:21.012 09:14:43 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:21.012 09:14:43 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:21.012 09:14:43 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.012 09:14:43 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.012 09:14:43 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.013 09:14:43 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:21.013 09:14:43 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.013 09:14:43 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:36:21.013 09:14:43 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:21.013 09:14:43 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:21.013 09:14:43 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:21.013 09:14:43 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:21.013 09:14:43 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:21.013 09:14:43 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:21.013 09:14:43 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:21.013 09:14:43 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:21.013 09:14:43 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:21.013 09:14:43 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:21.013 09:14:43 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:21.013 09:14:43 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:21.013 09:14:43 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:21.013 09:14:43 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:21.013 09:14:43 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:21.013 09:14:43 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:21.013 09:14:43 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:21.013 09:14:43 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:21.013 09:14:43 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:21.013 09:14:43 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:21.013 09:14:43 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:21.013 09:14:43 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:21.013 09:14:43 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:21.013 09:14:43 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:36:21.013 09:14:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:27.605 09:14:49 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:27.606 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:36:27.606 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:27.606 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:27.606 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:27.606 09:14:49 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:27.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:27.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:36:27.606 00:36:27.606 --- 10.0.0.2 ping statistics --- 00:36:27.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:27.606 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:27.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:27.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.399 ms 00:36:27.606 00:36:27.606 --- 10.0.0.1 ping statistics --- 00:36:27.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:27.606 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:27.606 09:14:49 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:30.907 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:30.907 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:30.907 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:30.907 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:30.907 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:30.907 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:30.907 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:30.907 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:30.907 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:30.907 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:30.907 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:30.907 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:30.907 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:30.907 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:30.907 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:30.907 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:30.907 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:31.168 09:14:53 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:31.168 09:14:53 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:31.168 09:14:53 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:31.168 09:14:53 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:31.168 09:14:53 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:31.168 09:14:53 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:31.168 09:14:53 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:31.168 09:14:53 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:31.168 09:14:53 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:31.168 09:14:53 nvmf_dif -- common/autotest_common.sh@723 -- # xtrace_disable 00:36:31.168 09:14:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:31.168 09:14:53 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2862122 00:36:31.168 09:14:53 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2862122 00:36:31.168 09:14:53 nvmf_dif -- common/autotest_common.sh@830 -- # '[' -z 2862122 ']' 00:36:31.168 09:14:53 nvmf_dif -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:31.168 09:14:53 nvmf_dif -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:31.168 09:14:53 nvmf_dif -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:31.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:31.168 09:14:53 nvmf_dif -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:31.168 09:14:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:31.168 09:14:53 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:31.168 [2024-06-09 09:14:53.569806] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:36:31.168 [2024-06-09 09:14:53.569854] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:31.168 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.168 [2024-06-09 09:14:53.633105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:31.168 [2024-06-09 09:14:53.696724] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:31.168 [2024-06-09 09:14:53.696758] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:31.168 [2024-06-09 09:14:53.696765] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:31.168 [2024-06-09 09:14:53.696772] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:31.168 [2024-06-09 09:14:53.696777] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:31.168 [2024-06-09 09:14:53.696795] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:36:32.109 09:14:54 nvmf_dif -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:32.109 09:14:54 nvmf_dif -- common/autotest_common.sh@863 -- # return 0 00:36:32.109 09:14:54 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:32.109 09:14:54 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:32.109 09:14:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:32.109 09:14:54 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:32.109 09:14:54 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:32.109 09:14:54 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:32.109 09:14:54 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.109 09:14:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:32.109 [2024-06-09 09:14:54.367342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:32.109 09:14:54 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.109 09:14:54 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:32.109 09:14:54 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:36:32.109 09:14:54 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:32.109 09:14:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:32.109 ************************************ 00:36:32.109 START TEST fio_dif_1_default 00:36:32.109 ************************************ 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # fio_dif_1 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:32.109 bdev_null0 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:32.109 [2024-06-09 09:14:54.435649] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local sanitizers 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # shift 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local asan_lib= 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- 
nvmf/common.sh@532 -- # local subsystem config 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:32.109 { 00:36:32.109 "params": { 00:36:32.109 "name": "Nvme$subsystem", 00:36:32.109 "trtype": "$TEST_TRANSPORT", 00:36:32.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:32.109 "adrfam": "ipv4", 00:36:32.109 "trsvcid": "$NVMF_PORT", 00:36:32.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:32.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:32.109 "hdgst": ${hdgst:-false}, 00:36:32.109 "ddgst": ${ddgst:-false} 00:36:32.109 }, 00:36:32.109 "method": "bdev_nvme_attach_controller" 00:36:32.109 } 00:36:32.109 EOF 00:36:32.109 )") 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libasan 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:32.109 "params": { 00:36:32.109 "name": "Nvme0", 00:36:32.109 "trtype": "tcp", 00:36:32.109 "traddr": "10.0.0.2", 00:36:32.109 "adrfam": "ipv4", 00:36:32.109 "trsvcid": "4420", 00:36:32.109 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:32.109 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:32.109 "hdgst": false, 00:36:32.109 "ddgst": false 00:36:32.109 }, 00:36:32.109 "method": "bdev_nvme_attach_controller" 00:36:32.109 }' 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:32.109 09:14:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.368 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:32.368 fio-3.35 
00:36:32.368 Starting 1 thread 00:36:32.368 EAL: No free 2048 kB hugepages reported on node 1 00:36:44.605 00:36:44.605 filename0: (groupid=0, jobs=1): err= 0: pid=2862652: Sun Jun 9 09:15:05 2024 00:36:44.605 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10004msec) 00:36:44.605 slat (nsec): min=5655, max=32620, avg=6657.02, stdev=1697.82 00:36:44.605 clat (usec): min=41063, max=44205, avg=42012.05, stdev=226.27 00:36:44.605 lat (usec): min=41069, max=44238, avg=42018.70, stdev=226.74 00:36:44.605 clat percentiles (usec): 00:36:44.605 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:36:44.605 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:36:44.605 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:44.605 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:36:44.605 | 99.99th=[44303] 00:36:44.605 bw ( KiB/s): min= 352, max= 384, per=99.57%, avg=379.20, stdev=11.72, samples=20 00:36:44.605 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:36:44.605 lat (msec) : 50=100.00% 00:36:44.605 cpu : usr=95.58%, sys=4.21%, ctx=12, majf=0, minf=217 00:36:44.605 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:44.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.605 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.605 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:44.605 00:36:44.605 Run status group 0 (all jobs): 00:36:44.605 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3808KiB (3899kB), run=10004-10004msec 00:36:44.605 09:15:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:44.605 09:15:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:44.605 09:15:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub 
in "$@" 00:36:44.605 09:15:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:44.605 09:15:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:44.605 09:15:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:44.605 09:15:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.605 09:15:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:44.605 09:15:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.605 09:15:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:44.605 09:15:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.605 09:15:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:44.605 09:15:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.605 00:36:44.605 real 0m11.122s 00:36:44.605 user 0m23.434s 00:36:44.605 sys 0m0.693s 00:36:44.605 09:15:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # xtrace_disable 00:36:44.605 09:15:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:44.605 ************************************ 00:36:44.605 END TEST fio_dif_1_default 00:36:44.605 ************************************ 00:36:44.605 09:15:05 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:44.605 09:15:05 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:36:44.605 09:15:05 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:44.606 09:15:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:44.606 ************************************ 00:36:44.606 START TEST fio_dif_1_multi_subsystems 00:36:44.606 ************************************ 00:36:44.606 09:15:05 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # fio_dif_1_multi_subsystems 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:44.606 bdev_null0 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.606 09:15:05 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:44.606 [2024-06-09 09:15:05.651722] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:44.606 bdev_null1 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1355 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:44.606 { 00:36:44.606 "params": { 00:36:44.606 "name": "Nvme$subsystem", 00:36:44.606 "trtype": "$TEST_TRANSPORT", 00:36:44.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:44.606 "adrfam": "ipv4", 00:36:44.606 "trsvcid": "$NVMF_PORT", 00:36:44.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:44.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:44.606 "hdgst": ${hdgst:-false}, 00:36:44.606 "ddgst": ${ddgst:-false} 00:36:44.606 }, 00:36:44.606 "method": "bdev_nvme_attach_controller" 00:36:44.606 } 00:36:44.606 EOF 00:36:44.606 )") 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local sanitizers 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # shift 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local asan_lib= 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 
00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libasan 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:44.606 { 00:36:44.606 "params": { 00:36:44.606 "name": "Nvme$subsystem", 00:36:44.606 "trtype": "$TEST_TRANSPORT", 00:36:44.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:44.606 "adrfam": "ipv4", 00:36:44.606 "trsvcid": "$NVMF_PORT", 00:36:44.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:44.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:44.606 "hdgst": ${hdgst:-false}, 00:36:44.606 "ddgst": ${ddgst:-false} 00:36:44.606 }, 00:36:44.606 "method": "bdev_nvme_attach_controller" 00:36:44.606 } 00:36:44.606 EOF 00:36:44.606 )") 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:44.606 "params": { 00:36:44.606 "name": "Nvme0", 00:36:44.606 "trtype": "tcp", 00:36:44.606 "traddr": "10.0.0.2", 00:36:44.606 "adrfam": "ipv4", 00:36:44.606 "trsvcid": "4420", 00:36:44.606 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:44.606 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:44.606 "hdgst": false, 00:36:44.606 "ddgst": false 00:36:44.606 }, 00:36:44.606 "method": "bdev_nvme_attach_controller" 00:36:44.606 },{ 00:36:44.606 "params": { 00:36:44.606 "name": "Nvme1", 00:36:44.606 "trtype": "tcp", 00:36:44.606 "traddr": "10.0.0.2", 00:36:44.606 "adrfam": "ipv4", 00:36:44.606 "trsvcid": "4420", 00:36:44.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:44.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:44.606 "hdgst": false, 00:36:44.606 "ddgst": false 00:36:44.606 }, 00:36:44.606 "method": "bdev_nvme_attach_controller" 00:36:44.606 }' 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:36:44.606 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:44.607 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:36:44.607 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:36:44.607 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:36:44.607 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:36:44.607 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:44.607 09:15:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:44.607 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:44.607 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:44.607 fio-3.35 00:36:44.607 Starting 2 threads 00:36:44.607 EAL: No free 2048 kB hugepages reported on node 1 00:36:54.612 00:36:54.612 filename0: (groupid=0, jobs=1): err= 0: pid=2865246: Sun Jun 9 09:15:16 2024 00:36:54.612 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10042msec) 00:36:54.612 slat (nsec): min=5661, max=36524, avg=6958.69, stdev=1699.02 00:36:54.612 clat (usec): min=41614, max=43689, avg=41995.84, stdev=152.13 00:36:54.612 lat (usec): min=41620, max=43726, avg=42002.80, stdev=152.64 00:36:54.612 clat percentiles (usec): 00:36:54.612 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:36:54.612 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:36:54.612 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:54.612 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:36:54.612 | 99.99th=[43779] 00:36:54.612 bw ( KiB/s): min= 352, max= 384, per=49.89%, avg=380.80, stdev= 9.85, samples=20 00:36:54.612 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:36:54.612 lat (msec) : 50=100.00% 00:36:54.612 cpu : usr=98.36%, sys=1.43%, ctx=11, majf=0, minf=94 00:36:54.612 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:54.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:36:54.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:54.612 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:54.612 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:54.612 filename1: (groupid=0, jobs=1): err= 0: pid=2865247: Sun Jun 9 09:15:16 2024 00:36:54.612 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10037msec) 00:36:54.612 slat (nsec): min=5666, max=36082, avg=7214.80, stdev=2126.20 00:36:54.612 clat (usec): min=41011, max=42920, avg=41975.35, stdev=139.23 00:36:54.612 lat (usec): min=41017, max=42955, avg=41982.56, stdev=139.61 00:36:54.612 clat percentiles (usec): 00:36:54.612 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:36:54.612 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:36:54.612 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:54.612 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:36:54.612 | 99.99th=[42730] 00:36:54.612 bw ( KiB/s): min= 352, max= 384, per=49.89%, avg=380.80, stdev= 9.85, samples=20 00:36:54.612 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:36:54.612 lat (msec) : 50=100.00% 00:36:54.612 cpu : usr=97.94%, sys=1.83%, ctx=11, majf=0, minf=148 00:36:54.612 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:54.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:54.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:54.612 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:54.612 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:54.612 00:36:54.612 Run status group 0 (all jobs): 00:36:54.612 READ: bw=762KiB/s (780kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=7648KiB (7832kB), run=10037-10042msec 00:36:54.612 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:54.612 
09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:54.612 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:54.612 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:54.612 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:54.612 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:54.612 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:54.613 00:36:54.613 real 0m11.452s 00:36:54.613 user 0m33.028s 00:36:54.613 sys 0m0.675s 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # xtrace_disable 00:36:54.613 09:15:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:54.613 ************************************ 00:36:54.613 END TEST fio_dif_1_multi_subsystems 00:36:54.613 ************************************ 00:36:54.613 09:15:17 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:54.613 09:15:17 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:36:54.613 09:15:17 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:54.613 09:15:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:54.613 ************************************ 00:36:54.613 START TEST fio_dif_rand_params 00:36:54.613 ************************************ 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # fio_dif_rand_params 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:54.613 bdev_null0 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:54.613 09:15:17 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:54.613 [2024-06-09 09:15:17.159388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:54.613 { 00:36:54.613 "params": { 
00:36:54.613 "name": "Nvme$subsystem", 00:36:54.613 "trtype": "$TEST_TRANSPORT", 00:36:54.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:54.613 "adrfam": "ipv4", 00:36:54.613 "trsvcid": "$NVMF_PORT", 00:36:54.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:54.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:54.613 "hdgst": ${hdgst:-false}, 00:36:54.613 "ddgst": ${ddgst:-false} 00:36:54.613 }, 00:36:54.613 "method": "bdev_nvme_attach_controller" 00:36:54.613 } 00:36:54.613 EOF 00:36:54.613 )") 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:36:54.613 09:15:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:54.932 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:54.932 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:54.932 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:36:54.932 09:15:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:54.932 09:15:17 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:36:54.932 09:15:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:36:54.932 09:15:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:54.932 09:15:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:54.932 "params": { 00:36:54.932 "name": "Nvme0", 00:36:54.932 "trtype": "tcp", 00:36:54.932 "traddr": "10.0.0.2", 00:36:54.932 "adrfam": "ipv4", 00:36:54.932 "trsvcid": "4420", 00:36:54.932 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:54.932 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:54.932 "hdgst": false, 00:36:54.932 "ddgst": false 00:36:54.932 }, 00:36:54.932 "method": "bdev_nvme_attach_controller" 00:36:54.932 }' 00:36:54.932 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:36:54.932 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:36:54.932 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:36:54.932 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:54.932 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:36:54.932 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:36:54.932 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:36:54.932 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:36:54.932 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:54.932 09:15:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:55.204 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:55.204 ... 00:36:55.204 fio-3.35 00:36:55.204 Starting 3 threads 00:36:55.204 EAL: No free 2048 kB hugepages reported on node 1 00:37:01.787 00:37:01.787 filename0: (groupid=0, jobs=1): err= 0: pid=2867930: Sun Jun 9 09:15:23 2024 00:37:01.787 read: IOPS=184, BW=23.1MiB/s (24.2MB/s)(116MiB/5020msec) 00:37:01.787 slat (nsec): min=5691, max=33420, avg=8515.86, stdev=1788.06 00:37:01.787 clat (usec): min=6785, max=95255, avg=16208.58, stdev=14742.36 00:37:01.787 lat (usec): min=6794, max=95264, avg=16217.09, stdev=14742.44 00:37:01.787 clat percentiles (usec): 00:37:01.787 | 1.00th=[ 7242], 5.00th=[ 8029], 10.00th=[ 8455], 20.00th=[ 9372], 00:37:01.787 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11338], 60.00th=[11994], 00:37:01.787 | 70.00th=[12780], 80.00th=[13566], 90.00th=[51643], 95.00th=[53740], 00:37:01.787 | 99.00th=[56886], 99.50th=[93848], 99.90th=[94897], 99.95th=[94897], 00:37:01.787 | 99.99th=[94897] 00:37:01.787 bw ( KiB/s): min=18688, max=31744, per=36.08%, avg=23680.00, stdev=4069.69, samples=10 00:37:01.787 iops : min= 146, max= 248, avg=185.00, stdev=31.79, samples=10 00:37:01.787 lat (msec) : 10=29.42%, 20=58.84%, 50=0.32%, 100=11.42% 00:37:01.787 cpu : usr=94.54%, sys=4.32%, ctx=432, majf=0, minf=110 00:37:01.787 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:01.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.787 issued rwts: total=928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:01.787 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:01.787 filename0: (groupid=0, jobs=1): err= 0: pid=2867931: Sun Jun 9 09:15:23 2024 00:37:01.787 read: IOPS=165, BW=20.7MiB/s (21.7MB/s)(103MiB/5004msec) 00:37:01.787 slat 
(nsec): min=5684, max=33500, avg=8430.08, stdev=1813.67 00:37:01.787 clat (usec): min=5884, max=95431, avg=18140.15, stdev=15368.01 00:37:01.787 lat (usec): min=5894, max=95440, avg=18148.58, stdev=15368.20 00:37:01.787 clat percentiles (usec): 00:37:01.787 | 1.00th=[ 6652], 5.00th=[ 8094], 10.00th=[ 8717], 20.00th=[ 9896], 00:37:01.787 | 30.00th=[10683], 40.00th=[11731], 50.00th=[12649], 60.00th=[13698], 00:37:01.787 | 70.00th=[14746], 80.00th=[16319], 90.00th=[52691], 95.00th=[54789], 00:37:01.787 | 99.00th=[58459], 99.50th=[69731], 99.90th=[95945], 99.95th=[95945], 00:37:01.787 | 99.99th=[95945] 00:37:01.787 bw ( KiB/s): min=14592, max=32768, per=32.24%, avg=21162.67, stdev=5183.60, samples=9 00:37:01.787 iops : min= 114, max= 256, avg=165.33, stdev=40.50, samples=9 00:37:01.787 lat (msec) : 10=21.04%, 20=64.45%, 50=1.21%, 100=13.30% 00:37:01.787 cpu : usr=96.58%, sys=3.10%, ctx=8, majf=0, minf=77 00:37:01.787 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:01.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.787 issued rwts: total=827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:01.787 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:01.787 filename0: (groupid=0, jobs=1): err= 0: pid=2867932: Sun Jun 9 09:15:23 2024 00:37:01.787 read: IOPS=165, BW=20.6MiB/s (21.7MB/s)(104MiB/5049msec) 00:37:01.787 slat (nsec): min=5695, max=32515, avg=8237.36, stdev=1763.65 00:37:01.787 clat (usec): min=6329, max=99055, avg=18097.53, stdev=15452.86 00:37:01.787 lat (usec): min=6337, max=99064, avg=18105.76, stdev=15452.90 00:37:01.787 clat percentiles (usec): 00:37:01.787 | 1.00th=[ 6915], 5.00th=[ 8160], 10.00th=[ 9372], 20.00th=[10028], 00:37:01.787 | 30.00th=[10814], 40.00th=[11600], 50.00th=[12387], 60.00th=[13304], 00:37:01.787 | 70.00th=[14484], 80.00th=[16581], 90.00th=[52167], 95.00th=[55837], 00:37:01.787 | 
99.00th=[61080], 99.50th=[62129], 99.90th=[99091], 99.95th=[99091], 00:37:01.787 | 99.99th=[99091] 00:37:01.787 bw ( KiB/s): min=16128, max=25856, per=32.41%, avg=21273.60, stdev=3933.58, samples=10 00:37:01.787 iops : min= 126, max= 202, avg=166.20, stdev=30.73, samples=10 00:37:01.787 lat (msec) : 10=17.39%, 20=68.35%, 50=1.32%, 100=12.95% 00:37:01.787 cpu : usr=96.73%, sys=2.95%, ctx=12, majf=0, minf=86 00:37:01.787 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:01.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.787 issued rwts: total=834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:01.787 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:01.787 00:37:01.787 Run status group 0 (all jobs): 00:37:01.787 READ: bw=64.1MiB/s (67.2MB/s), 20.6MiB/s-23.1MiB/s (21.7MB/s-24.2MB/s), io=324MiB (339MB), run=5004-5049msec 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:01.787 bdev_null0 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:01.787 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:01.788 [2024-06-09 09:15:23.391647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:01.788 bdev_null1 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.788 09:15:23 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:01.788 bdev_null2 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:01.788 { 00:37:01.788 "params": { 00:37:01.788 "name": "Nvme$subsystem", 00:37:01.788 "trtype": "$TEST_TRANSPORT", 00:37:01.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:01.788 "adrfam": "ipv4", 00:37:01.788 "trsvcid": 
"$NVMF_PORT", 00:37:01.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:01.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:01.788 "hdgst": ${hdgst:-false}, 00:37:01.788 "ddgst": ${ddgst:-false} 00:37:01.788 }, 00:37:01.788 "method": "bdev_nvme_attach_controller" 00:37:01.788 } 00:37:01.788 EOF 00:37:01.788 )") 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:01.788 { 00:37:01.788 "params": { 00:37:01.788 "name": "Nvme$subsystem", 00:37:01.788 "trtype": "$TEST_TRANSPORT", 00:37:01.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:01.788 "adrfam": "ipv4", 00:37:01.788 "trsvcid": "$NVMF_PORT", 00:37:01.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:01.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:01.788 "hdgst": ${hdgst:-false}, 00:37:01.788 "ddgst": ${ddgst:-false} 00:37:01.788 }, 00:37:01.788 "method": "bdev_nvme_attach_controller" 00:37:01.788 } 00:37:01.788 EOF 00:37:01.788 )") 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= 
files )) 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:01.788 { 00:37:01.788 "params": { 00:37:01.788 "name": "Nvme$subsystem", 00:37:01.788 "trtype": "$TEST_TRANSPORT", 00:37:01.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:01.788 "adrfam": "ipv4", 00:37:01.788 "trsvcid": "$NVMF_PORT", 00:37:01.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:01.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:01.788 "hdgst": ${hdgst:-false}, 00:37:01.788 "ddgst": ${ddgst:-false} 00:37:01.788 }, 00:37:01.788 "method": "bdev_nvme_attach_controller" 00:37:01.788 } 00:37:01.788 EOF 00:37:01.788 )") 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:01.788 09:15:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:01.788 "params": { 00:37:01.788 "name": "Nvme0", 00:37:01.788 "trtype": "tcp", 00:37:01.788 "traddr": "10.0.0.2", 00:37:01.788 "adrfam": "ipv4", 00:37:01.788 "trsvcid": "4420", 00:37:01.788 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:01.788 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:01.788 "hdgst": false, 00:37:01.788 "ddgst": false 00:37:01.788 }, 00:37:01.788 "method": "bdev_nvme_attach_controller" 00:37:01.788 },{ 00:37:01.788 "params": { 00:37:01.788 "name": "Nvme1", 00:37:01.789 "trtype": "tcp", 00:37:01.789 "traddr": "10.0.0.2", 00:37:01.789 "adrfam": "ipv4", 00:37:01.789 "trsvcid": "4420", 00:37:01.789 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:01.789 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:01.789 "hdgst": false, 00:37:01.789 "ddgst": false 00:37:01.789 }, 00:37:01.789 "method": "bdev_nvme_attach_controller" 00:37:01.789 },{ 00:37:01.789 "params": { 00:37:01.789 "name": "Nvme2", 00:37:01.789 "trtype": "tcp", 00:37:01.789 "traddr": "10.0.0.2", 00:37:01.789 "adrfam": "ipv4", 00:37:01.789 "trsvcid": "4420", 00:37:01.789 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:01.789 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:01.789 "hdgst": false, 00:37:01.789 "ddgst": false 00:37:01.789 }, 00:37:01.789 "method": "bdev_nvme_attach_controller" 00:37:01.789 }' 00:37:01.789 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:01.789 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:01.789 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:01.789 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:01.789 09:15:23 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:37:01.789 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:01.789 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:01.789 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:01.789 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:01.789 09:15:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:01.789 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:01.789 ... 00:37:01.789 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:01.789 ... 00:37:01.789 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:01.789 ... 
00:37:01.789 fio-3.35 00:37:01.789 Starting 24 threads 00:37:01.789 EAL: No free 2048 kB hugepages reported on node 1 00:37:14.026 00:37:14.026 filename0: (groupid=0, jobs=1): err= 0: pid=2869396: Sun Jun 9 09:15:34 2024 00:37:14.026 read: IOPS=534, BW=2140KiB/s (2191kB/s)(20.9MiB/10026msec) 00:37:14.026 slat (usec): min=2, max=1222, avg= 7.32, stdev=16.74 00:37:14.026 clat (usec): min=11342, max=63543, avg=29858.15, stdev=6233.29 00:37:14.026 lat (usec): min=11350, max=63554, avg=29865.46, stdev=6233.44 00:37:14.026 clat percentiles (usec): 00:37:14.026 | 1.00th=[14746], 5.00th=[18744], 10.00th=[20841], 20.00th=[23462], 00:37:14.026 | 30.00th=[30540], 40.00th=[31327], 50.00th=[31851], 60.00th=[32113], 00:37:14.026 | 70.00th=[32375], 80.00th=[32375], 90.00th=[33817], 95.00th=[37487], 00:37:14.026 | 99.00th=[47449], 99.50th=[53216], 99.90th=[63701], 99.95th=[63701], 00:37:14.026 | 99.99th=[63701] 00:37:14.026 bw ( KiB/s): min= 1920, max= 2704, per=4.53%, avg=2138.55, stdev=211.71, samples=20 00:37:14.026 iops : min= 480, max= 676, avg=534.60, stdev=52.94, samples=20 00:37:14.026 lat (msec) : 20=6.73%, 50=92.56%, 100=0.71% 00:37:14.026 cpu : usr=99.12%, sys=0.59%, ctx=51, majf=0, minf=93 00:37:14.026 IO depths : 1=3.2%, 2=6.7%, 4=17.0%, 8=63.1%, 16=10.0%, 32=0.0%, >=64=0.0% 00:37:14.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.026 complete : 0=0.0%, 4=92.1%, 8=2.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.026 issued rwts: total=5363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.026 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.026 filename0: (groupid=0, jobs=1): err= 0: pid=2869397: Sun Jun 9 09:15:34 2024 00:37:14.026 read: IOPS=501, BW=2007KiB/s (2055kB/s)(19.6MiB/10010msec) 00:37:14.026 slat (nsec): min=5830, max=51687, avg=8626.26, stdev=4303.90 00:37:14.026 clat (usec): min=2113, max=57892, avg=31832.96, stdev=7122.27 00:37:14.026 lat (usec): min=2130, max=57910, avg=31841.59, stdev=7121.69 
00:37:14.026 clat percentiles (usec): 00:37:14.026 | 1.00th=[ 3982], 5.00th=[20579], 10.00th=[22676], 20.00th=[30802], 00:37:14.026 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:37:14.026 | 70.00th=[32375], 80.00th=[33162], 90.00th=[41157], 95.00th=[44303], 00:37:14.026 | 99.00th=[51643], 99.50th=[52167], 99.90th=[57934], 99.95th=[57934], 00:37:14.026 | 99.99th=[57934] 00:37:14.026 bw ( KiB/s): min= 1784, max= 2682, per=4.24%, avg=2002.10, stdev=177.03, samples=20 00:37:14.026 iops : min= 446, max= 670, avg=500.50, stdev=44.16, samples=20 00:37:14.026 lat (msec) : 4=1.02%, 10=0.90%, 20=2.75%, 50=94.15%, 100=1.19% 00:37:14.026 cpu : usr=99.12%, sys=0.59%, ctx=22, majf=0, minf=42 00:37:14.026 IO depths : 1=1.2%, 2=4.7%, 4=17.6%, 8=64.5%, 16=11.9%, 32=0.0%, >=64=0.0% 00:37:14.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.026 complete : 0=0.0%, 4=92.6%, 8=2.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.026 issued rwts: total=5022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.026 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.026 filename0: (groupid=0, jobs=1): err= 0: pid=2869399: Sun Jun 9 09:15:34 2024 00:37:14.026 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.5MiB/10003msec) 00:37:14.026 slat (nsec): min=5598, max=83934, avg=15906.85, stdev=10553.12 00:37:14.026 clat (usec): min=10560, max=59165, avg=31975.25, stdev=3844.90 00:37:14.026 lat (usec): min=10567, max=59181, avg=31991.16, stdev=3845.36 00:37:14.026 clat percentiles (usec): 00:37:14.026 | 1.00th=[17695], 5.00th=[30016], 10.00th=[30540], 20.00th=[31327], 00:37:14.026 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:14.026 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:37:14.026 | 99.00th=[47449], 99.50th=[52167], 99.90th=[58983], 99.95th=[58983], 00:37:14.026 | 99.99th=[58983] 00:37:14.026 bw ( KiB/s): min= 1792, max= 2048, per=4.20%, avg=1984.37, stdev=69.41, 
samples=19 00:37:14.026 iops : min= 448, max= 512, avg=496.05, stdev=17.33, samples=19 00:37:14.026 lat (msec) : 20=1.85%, 50=97.55%, 100=0.60% 00:37:14.026 cpu : usr=95.69%, sys=2.06%, ctx=103, majf=0, minf=40 00:37:14.026 IO depths : 1=1.7%, 2=6.7%, 4=21.0%, 8=59.1%, 16=11.6%, 32=0.0%, >=64=0.0% 00:37:14.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.026 complete : 0=0.0%, 4=93.5%, 8=1.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.026 issued rwts: total=4986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.026 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.026 filename0: (groupid=0, jobs=1): err= 0: pid=2869400: Sun Jun 9 09:15:34 2024 00:37:14.026 read: IOPS=459, BW=1837KiB/s (1882kB/s)(18.0MiB/10018msec) 00:37:14.026 slat (usec): min=5, max=100, avg=17.38, stdev=12.04 00:37:14.026 clat (usec): min=18011, max=60819, avg=34673.87, stdev=5996.24 00:37:14.026 lat (usec): min=18018, max=60828, avg=34691.25, stdev=5994.40 00:37:14.026 clat percentiles (usec): 00:37:14.026 | 1.00th=[19530], 5.00th=[27132], 10.00th=[30802], 20.00th=[31589], 00:37:14.026 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32637], 00:37:14.026 | 70.00th=[35914], 80.00th=[40109], 90.00th=[44303], 95.00th=[45351], 00:37:14.026 | 99.00th=[50070], 99.50th=[52691], 99.90th=[55837], 99.95th=[55837], 00:37:14.026 | 99.99th=[61080] 00:37:14.026 bw ( KiB/s): min= 1488, max= 2024, per=3.89%, avg=1837.55, stdev=143.31, samples=20 00:37:14.026 iops : min= 372, max= 506, avg=459.35, stdev=35.86, samples=20 00:37:14.026 lat (msec) : 20=1.33%, 50=97.35%, 100=1.33% 00:37:14.026 cpu : usr=98.52%, sys=1.04%, ctx=102, majf=0, minf=54 00:37:14.026 IO depths : 1=3.0%, 2=6.1%, 4=15.7%, 8=64.3%, 16=10.9%, 32=0.0%, >=64=0.0% 00:37:14.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.026 complete : 0=0.0%, 4=92.0%, 8=3.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.026 issued rwts: total=4602,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:37:14.026 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.026 filename0: (groupid=0, jobs=1): err= 0: pid=2869401: Sun Jun 9 09:15:34 2024 00:37:14.026 read: IOPS=467, BW=1868KiB/s (1913kB/s)(18.3MiB/10019msec) 00:37:14.026 slat (nsec): min=5854, max=74038, avg=16855.15, stdev=11086.45 00:37:14.026 clat (usec): min=17443, max=55640, avg=34086.65, stdev=5326.00 00:37:14.026 lat (usec): min=17452, max=55658, avg=34103.51, stdev=5324.28 00:37:14.026 clat percentiles (usec): 00:37:14.026 | 1.00th=[20317], 5.00th=[29230], 10.00th=[30802], 20.00th=[31327], 00:37:14.026 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:37:14.026 | 70.00th=[32900], 80.00th=[39584], 90.00th=[43254], 95.00th=[44827], 00:37:14.026 | 99.00th=[48497], 99.50th=[52691], 99.90th=[55313], 99.95th=[55313], 00:37:14.026 | 99.99th=[55837] 00:37:14.027 bw ( KiB/s): min= 1536, max= 2048, per=3.97%, avg=1873.05, stdev=153.01, samples=19 00:37:14.027 iops : min= 384, max= 512, avg=468.26, stdev=38.25, samples=19 00:37:14.027 lat (msec) : 20=0.62%, 50=98.76%, 100=0.62% 00:37:14.027 cpu : usr=98.65%, sys=1.02%, ctx=15, majf=0, minf=43 00:37:14.027 IO depths : 1=4.3%, 2=8.5%, 4=19.7%, 8=58.6%, 16=9.0%, 32=0.0%, >=64=0.0% 00:37:14.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.027 complete : 0=0.0%, 4=92.8%, 8=2.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.027 issued rwts: total=4680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.027 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.027 filename0: (groupid=0, jobs=1): err= 0: pid=2869402: Sun Jun 9 09:15:34 2024 00:37:14.027 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10024msec) 00:37:14.027 slat (nsec): min=5724, max=93190, avg=15537.20, stdev=10086.24 00:37:14.027 clat (usec): min=16210, max=62347, avg=33116.89, stdev=4769.49 00:37:14.027 lat (usec): min=16224, max=62364, avg=33132.43, stdev=4769.03 00:37:14.027 clat 
percentiles (usec): 00:37:14.027 | 1.00th=[20841], 5.00th=[26608], 10.00th=[30540], 20.00th=[31327], 00:37:14.027 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:37:14.027 | 70.00th=[32637], 80.00th=[33162], 90.00th=[41157], 95.00th=[43779], 00:37:14.027 | 99.00th=[49546], 99.50th=[50594], 99.90th=[61604], 99.95th=[62129], 00:37:14.027 | 99.99th=[62129] 00:37:14.027 bw ( KiB/s): min= 1792, max= 2048, per=4.07%, avg=1922.80, stdev=91.22, samples=20 00:37:14.027 iops : min= 448, max= 512, avg=480.70, stdev=22.81, samples=20 00:37:14.027 lat (msec) : 20=0.73%, 50=98.51%, 100=0.77% 00:37:14.027 cpu : usr=95.61%, sys=2.27%, ctx=60, majf=0, minf=61 00:37:14.027 IO depths : 1=3.3%, 2=6.6%, 4=16.7%, 8=63.2%, 16=10.2%, 32=0.0%, >=64=0.0% 00:37:14.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.027 complete : 0=0.0%, 4=92.1%, 8=3.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.027 issued rwts: total=4823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.027 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.027 filename0: (groupid=0, jobs=1): err= 0: pid=2869403: Sun Jun 9 09:15:34 2024 00:37:14.027 read: IOPS=509, BW=2039KiB/s (2088kB/s)(19.9MiB/10013msec) 00:37:14.027 slat (nsec): min=5857, max=67371, avg=11571.19, stdev=6985.76 00:37:14.027 clat (usec): min=10524, max=41799, avg=31285.18, stdev=2850.46 00:37:14.027 lat (usec): min=10541, max=41826, avg=31296.75, stdev=2851.15 00:37:14.027 clat percentiles (usec): 00:37:14.027 | 1.00th=[19268], 5.00th=[23462], 10.00th=[30540], 20.00th=[31065], 00:37:14.027 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:14.027 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:37:14.027 | 99.00th=[33817], 99.50th=[34866], 99.90th=[39584], 99.95th=[39584], 00:37:14.027 | 99.99th=[41681] 00:37:14.027 bw ( KiB/s): min= 1920, max= 2256, per=4.31%, avg=2035.20, stdev=86.02, samples=20 00:37:14.027 iops : min= 480, max= 
564, avg=508.80, stdev=21.51, samples=20 00:37:14.027 lat (msec) : 20=1.25%, 50=98.75% 00:37:14.027 cpu : usr=98.89%, sys=0.79%, ctx=12, majf=0, minf=52 00:37:14.027 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:14.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.027 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.027 issued rwts: total=5104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.027 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.027 filename0: (groupid=0, jobs=1): err= 0: pid=2869404: Sun Jun 9 09:15:34 2024 00:37:14.027 read: IOPS=499, BW=1997KiB/s (2045kB/s)(19.5MiB/10009msec) 00:37:14.027 slat (nsec): min=5863, max=57591, avg=12001.50, stdev=7623.75 00:37:14.027 clat (usec): min=17470, max=55744, avg=31928.81, stdev=1677.83 00:37:14.027 lat (usec): min=17478, max=55752, avg=31940.81, stdev=1678.05 00:37:14.027 clat percentiles (usec): 00:37:14.027 | 1.00th=[28967], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:37:14.027 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:14.027 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:37:14.027 | 99.00th=[34341], 99.50th=[40109], 99.90th=[54789], 99.95th=[54789], 00:37:14.027 | 99.99th=[55837] 00:37:14.027 bw ( KiB/s): min= 1920, max= 2104, per=4.23%, avg=1995.80, stdev=67.33, samples=20 00:37:14.027 iops : min= 480, max= 526, avg=498.95, stdev=16.83, samples=20 00:37:14.027 lat (msec) : 20=0.14%, 50=99.66%, 100=0.20% 00:37:14.027 cpu : usr=99.00%, sys=0.68%, ctx=64, majf=0, minf=57 00:37:14.027 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:14.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.027 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.027 issued rwts: total=4998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.027 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:37:14.027 filename1: (groupid=0, jobs=1): err= 0: pid=2869405: Sun Jun 9 09:15:34 2024 00:37:14.027 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10011msec) 00:37:14.027 slat (nsec): min=5852, max=59680, avg=15233.12, stdev=9713.91 00:37:14.027 clat (usec): min=10769, max=63211, avg=31864.39, stdev=2150.04 00:37:14.027 lat (usec): min=10776, max=63232, avg=31879.62, stdev=2150.59 00:37:14.027 clat percentiles (usec): 00:37:14.027 | 1.00th=[24249], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:37:14.027 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:14.027 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:37:14.027 | 99.00th=[34866], 99.50th=[41157], 99.90th=[49546], 99.95th=[50070], 00:37:14.027 | 99.99th=[63177] 00:37:14.027 bw ( KiB/s): min= 1907, max= 2048, per=4.23%, avg=1996.70, stdev=59.55, samples=20 00:37:14.027 iops : min= 476, max= 512, avg=499.10, stdev=14.92, samples=20 00:37:14.027 lat (msec) : 20=0.52%, 50=99.44%, 100=0.04% 00:37:14.027 cpu : usr=98.90%, sys=0.73%, ctx=21, majf=0, minf=58 00:37:14.027 IO depths : 1=2.9%, 2=9.1%, 4=24.9%, 8=53.5%, 16=9.6%, 32=0.0%, >=64=0.0% 00:37:14.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.027 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.027 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.027 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.027 filename1: (groupid=0, jobs=1): err= 0: pid=2869406: Sun Jun 9 09:15:34 2024 00:37:14.027 read: IOPS=498, BW=1996KiB/s (2043kB/s)(19.5MiB/10006msec) 00:37:14.027 slat (nsec): min=5922, max=91453, avg=10362.73, stdev=8243.84 00:37:14.027 clat (usec): min=21554, max=51500, avg=31980.83, stdev=1494.54 00:37:14.027 lat (usec): min=21561, max=51521, avg=31991.20, stdev=1494.56 00:37:14.027 clat percentiles (usec): 00:37:14.027 | 1.00th=[29754], 
5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:37:14.027 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:14.027 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:37:14.027 | 99.00th=[34341], 99.50th=[34866], 99.90th=[51643], 99.95th=[51643], 00:37:14.027 | 99.99th=[51643] 00:37:14.027 bw ( KiB/s): min= 1795, max= 2048, per=4.23%, avg=1994.05, stdev=77.48, samples=19 00:37:14.027 iops : min= 448, max= 512, avg=498.47, stdev=19.48, samples=19 00:37:14.027 lat (msec) : 50=99.68%, 100=0.32% 00:37:14.027 cpu : usr=99.25%, sys=0.46%, ctx=13, majf=0, minf=48 00:37:14.027 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:14.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.027 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.027 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.027 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.027 filename1: (groupid=0, jobs=1): err= 0: pid=2869407: Sun Jun 9 09:15:34 2024 00:37:14.027 read: IOPS=496, BW=1985KiB/s (2033kB/s)(19.4MiB/10023msec) 00:37:14.027 slat (nsec): min=5850, max=79071, avg=19561.24, stdev=12292.71 00:37:14.027 clat (usec): min=12909, max=58818, avg=32115.38, stdev=3539.16 00:37:14.027 lat (usec): min=12945, max=58843, avg=32134.94, stdev=3538.52 00:37:14.027 clat percentiles (usec): 00:37:14.027 | 1.00th=[20841], 5.00th=[30016], 10.00th=[30802], 20.00th=[31327], 00:37:14.027 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:14.027 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[34341], 00:37:14.027 | 99.00th=[50594], 99.50th=[54789], 99.90th=[58983], 99.95th=[58983], 00:37:14.027 | 99.99th=[58983] 00:37:14.027 bw ( KiB/s): min= 1747, max= 2048, per=4.20%, avg=1983.10, stdev=70.95, samples=20 00:37:14.027 iops : min= 436, max= 512, avg=495.70, stdev=17.84, samples=20 00:37:14.027 lat 
(msec) : 20=0.84%, 50=98.15%, 100=1.01% 00:37:14.027 cpu : usr=98.93%, sys=0.66%, ctx=113, majf=0, minf=62 00:37:14.027 IO depths : 1=0.4%, 2=1.3%, 4=9.2%, 8=72.6%, 16=16.5%, 32=0.0%, >=64=0.0% 00:37:14.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.027 complete : 0=0.0%, 4=93.7%, 8=1.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.027 issued rwts: total=4974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.027 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.027 filename1: (groupid=0, jobs=1): err= 0: pid=2869408: Sun Jun 9 09:15:34 2024 00:37:14.027 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10021msec) 00:37:14.027 slat (nsec): min=5836, max=99612, avg=15609.57, stdev=11986.78 00:37:14.027 clat (usec): min=15288, max=56172, avg=32631.53, stdev=4630.60 00:37:14.027 lat (usec): min=15297, max=56181, avg=32647.14, stdev=4631.17 00:37:14.027 clat percentiles (usec): 00:37:14.027 | 1.00th=[20841], 5.00th=[25560], 10.00th=[30016], 20.00th=[31065], 00:37:14.027 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:37:14.027 | 70.00th=[32375], 80.00th=[32900], 90.00th=[39584], 95.00th=[42206], 00:37:14.027 | 99.00th=[48497], 99.50th=[50594], 99.90th=[54789], 99.95th=[56361], 00:37:14.027 | 99.99th=[56361] 00:37:14.027 bw ( KiB/s): min= 1792, max= 2112, per=4.13%, avg=1952.00, stdev=93.44, samples=20 00:37:14.027 iops : min= 448, max= 528, avg=488.00, stdev=23.36, samples=20 00:37:14.027 lat (msec) : 20=0.57%, 50=98.82%, 100=0.61% 00:37:14.027 cpu : usr=98.40%, sys=1.19%, ctx=92, majf=0, minf=59 00:37:14.027 IO depths : 1=3.7%, 2=7.3%, 4=17.3%, 8=62.1%, 16=9.6%, 32=0.0%, >=64=0.0% 00:37:14.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.027 complete : 0=0.0%, 4=92.3%, 8=2.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.027 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.027 latency : target=0, window=0, percentile=100.00%, depth=16 
00:37:14.027 filename1: (groupid=0, jobs=1): err= 0: pid=2869410: Sun Jun 9 09:15:34 2024 00:37:14.028 read: IOPS=498, BW=1996KiB/s (2044kB/s)(19.5MiB/10005msec) 00:37:14.028 slat (nsec): min=5841, max=49250, avg=10915.10, stdev=6118.38 00:37:14.028 clat (usec): min=17405, max=61768, avg=31957.80, stdev=1658.25 00:37:14.028 lat (usec): min=17411, max=61783, avg=31968.71, stdev=1658.21 00:37:14.028 clat percentiles (usec): 00:37:14.028 | 1.00th=[29754], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:37:14.028 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:14.028 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:37:14.028 | 99.00th=[33817], 99.50th=[34866], 99.90th=[51643], 99.95th=[51643], 00:37:14.028 | 99.99th=[61604] 00:37:14.028 bw ( KiB/s): min= 1792, max= 2048, per=4.22%, avg=1993.37, stdev=77.54, samples=19 00:37:14.028 iops : min= 448, max= 512, avg=498.26, stdev=19.33, samples=19 00:37:14.028 lat (msec) : 20=0.24%, 50=99.44%, 100=0.32% 00:37:14.028 cpu : usr=99.23%, sys=0.49%, ctx=13, majf=0, minf=56 00:37:14.028 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:14.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.028 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.028 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.028 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.028 filename1: (groupid=0, jobs=1): err= 0: pid=2869411: Sun Jun 9 09:15:34 2024 00:37:14.028 read: IOPS=499, BW=1996KiB/s (2044kB/s)(19.5MiB/10002msec) 00:37:14.028 slat (nsec): min=5886, max=94476, avg=20531.27, stdev=14025.23 00:37:14.028 clat (usec): min=13078, max=57629, avg=31862.63, stdev=1974.96 00:37:14.028 lat (usec): min=13085, max=57648, avg=31883.16, stdev=1974.90 00:37:14.028 clat percentiles (usec): 00:37:14.028 | 1.00th=[29230], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 
00:37:14.028 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:14.028 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:37:14.028 | 99.00th=[34341], 99.50th=[34341], 99.90th=[57410], 99.95th=[57410], 00:37:14.028 | 99.99th=[57410] 00:37:14.028 bw ( KiB/s): min= 1795, max= 2048, per=4.21%, avg=1987.00, stdev=77.48, samples=19 00:37:14.028 iops : min= 448, max= 512, avg=496.63, stdev=19.41, samples=19 00:37:14.028 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:37:14.028 cpu : usr=99.17%, sys=0.55%, ctx=11, majf=0, minf=48 00:37:14.028 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:14.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.028 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.028 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.028 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.028 filename1: (groupid=0, jobs=1): err= 0: pid=2869412: Sun Jun 9 09:15:34 2024 00:37:14.028 read: IOPS=479, BW=1916KiB/s (1962kB/s)(18.8MiB/10059msec) 00:37:14.028 slat (nsec): min=5847, max=63086, avg=10010.23, stdev=6312.67 00:37:14.028 clat (usec): min=17543, max=70948, avg=33274.43, stdev=6631.72 00:37:14.028 lat (usec): min=17550, max=70956, avg=33284.44, stdev=6631.90 00:37:14.028 clat percentiles (usec): 00:37:14.028 | 1.00th=[19006], 5.00th=[21627], 10.00th=[25297], 20.00th=[30802], 00:37:14.028 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:37:14.028 | 70.00th=[32900], 80.00th=[38536], 90.00th=[42730], 95.00th=[44827], 00:37:14.028 | 99.00th=[52167], 99.50th=[52691], 99.90th=[62129], 99.95th=[70779], 00:37:14.028 | 99.99th=[70779] 00:37:14.028 bw ( KiB/s): min= 1696, max= 2080, per=4.07%, avg=1922.60, stdev=107.32, samples=20 00:37:14.028 iops : min= 424, max= 520, avg=480.65, stdev=26.83, samples=20 00:37:14.028 lat (msec) : 20=2.34%, 50=95.95%, 100=1.70% 
00:37:14.028 cpu : usr=98.76%, sys=0.86%, ctx=89, majf=0, minf=61 00:37:14.028 IO depths : 1=0.5%, 2=2.6%, 4=14.0%, 8=69.7%, 16=13.1%, 32=0.0%, >=64=0.0% 00:37:14.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.028 complete : 0=0.0%, 4=91.9%, 8=3.5%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.028 issued rwts: total=4819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.028 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.028 filename1: (groupid=0, jobs=1): err= 0: pid=2869413: Sun Jun 9 09:15:34 2024 00:37:14.028 read: IOPS=499, BW=1996KiB/s (2044kB/s)(19.5MiB/10003msec) 00:37:14.028 slat (nsec): min=5921, max=90596, avg=20653.75, stdev=13637.16 00:37:14.028 clat (usec): min=13133, max=58766, avg=31860.41, stdev=2012.72 00:37:14.028 lat (usec): min=13140, max=58783, avg=31881.06, stdev=2012.72 00:37:14.028 clat percentiles (usec): 00:37:14.028 | 1.00th=[29230], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:37:14.028 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:14.028 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:37:14.028 | 99.00th=[34341], 99.50th=[34341], 99.90th=[58983], 99.95th=[58983], 00:37:14.028 | 99.99th=[58983] 00:37:14.028 bw ( KiB/s): min= 1792, max= 2048, per=4.21%, avg=1986.84, stdev=77.89, samples=19 00:37:14.028 iops : min= 448, max= 512, avg=496.63, stdev=19.41, samples=19 00:37:14.028 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:37:14.028 cpu : usr=99.04%, sys=0.67%, ctx=10, majf=0, minf=54 00:37:14.028 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:14.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.028 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.028 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.028 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.028 filename2: (groupid=0, 
jobs=1): err= 0: pid=2869414: Sun Jun 9 09:15:34 2024 00:37:14.028 read: IOPS=493, BW=1973KiB/s (2021kB/s)(19.3MiB/10018msec) 00:37:14.028 slat (nsec): min=5838, max=95391, avg=15711.83, stdev=11909.95 00:37:14.028 clat (usec): min=15869, max=59468, avg=32337.97, stdev=3739.10 00:37:14.028 lat (usec): min=15879, max=59478, avg=32353.68, stdev=3739.71 00:37:14.028 clat percentiles (usec): 00:37:14.028 | 1.00th=[21890], 5.00th=[30016], 10.00th=[30802], 20.00th=[31327], 00:37:14.028 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:14.028 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[39060], 00:37:14.028 | 99.00th=[47449], 99.50th=[53216], 99.90th=[58459], 99.95th=[59507], 00:37:14.028 | 99.99th=[59507] 00:37:14.028 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1970.30, stdev=82.03, samples=20 00:37:14.028 iops : min= 448, max= 512, avg=492.50, stdev=20.56, samples=20 00:37:14.028 lat (msec) : 20=0.73%, 50=98.54%, 100=0.73% 00:37:14.028 cpu : usr=98.75%, sys=0.84%, ctx=111, majf=0, minf=54 00:37:14.028 IO depths : 1=0.7%, 2=1.5%, 4=8.9%, 8=76.8%, 16=12.0%, 32=0.0%, >=64=0.0% 00:37:14.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.028 complete : 0=0.0%, 4=89.7%, 8=4.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.028 issued rwts: total=4942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.028 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.028 filename2: (groupid=0, jobs=1): err= 0: pid=2869415: Sun Jun 9 09:15:34 2024 00:37:14.028 read: IOPS=507, BW=2031KiB/s (2080kB/s)(19.9MiB/10019msec) 00:37:14.028 slat (nsec): min=5908, max=95897, avg=16031.76, stdev=12633.01 00:37:14.028 clat (usec): min=2347, max=46426, avg=31376.48, stdev=3692.71 00:37:14.028 lat (usec): min=2364, max=46436, avg=31392.51, stdev=3692.94 00:37:14.028 clat percentiles (usec): 00:37:14.028 | 1.00th=[ 3916], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:37:14.028 | 30.00th=[31589], 
40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:14.028 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:37:14.028 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:37:14.028 | 99.99th=[46400] 00:37:14.028 bw ( KiB/s): min= 1920, max= 2560, per=4.30%, avg=2028.55, stdev=139.41, samples=20 00:37:14.028 iops : min= 480, max= 640, avg=507.10, stdev=34.85, samples=20 00:37:14.028 lat (msec) : 4=1.12%, 10=0.45%, 20=0.35%, 50=98.07% 00:37:14.028 cpu : usr=99.16%, sys=0.54%, ctx=73, majf=0, minf=39 00:37:14.028 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:14.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.028 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.028 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.028 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.028 filename2: (groupid=0, jobs=1): err= 0: pid=2869416: Sun Jun 9 09:15:34 2024 00:37:14.028 read: IOPS=499, BW=1996KiB/s (2044kB/s)(19.5MiB/10002msec) 00:37:14.028 slat (nsec): min=5841, max=74287, avg=17498.03, stdev=11796.53 00:37:14.028 clat (usec): min=22416, max=48350, avg=31920.10, stdev=1398.56 00:37:14.028 lat (usec): min=22422, max=48376, avg=31937.59, stdev=1399.27 00:37:14.028 clat percentiles (usec): 00:37:14.028 | 1.00th=[29754], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:37:14.028 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:14.028 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:37:14.028 | 99.00th=[34866], 99.50th=[40109], 99.90th=[48497], 99.95th=[48497], 00:37:14.028 | 99.99th=[48497] 00:37:14.028 bw ( KiB/s): min= 1912, max= 2048, per=4.22%, avg=1993.63, stdev=59.06, samples=19 00:37:14.028 iops : min= 478, max= 512, avg=498.37, stdev=14.73, samples=19 00:37:14.028 lat (msec) : 50=100.00% 00:37:14.028 cpu : usr=98.69%, sys=0.86%, 
ctx=58, majf=0, minf=49 00:37:14.028 IO depths : 1=2.8%, 2=7.0%, 4=24.9%, 8=55.6%, 16=9.7%, 32=0.0%, >=64=0.0% 00:37:14.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.028 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.028 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.028 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.028 filename2: (groupid=0, jobs=1): err= 0: pid=2869417: Sun Jun 9 09:15:34 2024 00:37:14.028 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.5MiB/10014msec) 00:37:14.028 slat (nsec): min=5859, max=80263, avg=13659.26, stdev=12771.99 00:37:14.028 clat (usec): min=25940, max=58889, avg=31980.27, stdev=1485.91 00:37:14.028 lat (usec): min=25950, max=58911, avg=31993.93, stdev=1485.98 00:37:14.028 clat percentiles (usec): 00:37:14.028 | 1.00th=[29492], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:37:14.028 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:14.028 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:37:14.028 | 99.00th=[34341], 99.50th=[34341], 99.90th=[52167], 99.95th=[58983], 00:37:14.028 | 99.99th=[58983] 00:37:14.028 bw ( KiB/s): min= 1792, max= 2048, per=4.22%, avg=1990.15, stdev=77.23, samples=20 00:37:14.029 iops : min= 448, max= 512, avg=497.50, stdev=19.28, samples=20 00:37:14.029 lat (msec) : 50=99.68%, 100=0.32% 00:37:14.029 cpu : usr=99.12%, sys=0.60%, ctx=13, majf=0, minf=53 00:37:14.029 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:14.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.029 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.029 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.029 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.029 filename2: (groupid=0, jobs=1): err= 0: pid=2869418: Sun Jun 9 09:15:34 2024 
00:37:14.029 read: IOPS=473, BW=1892KiB/s (1937kB/s)(18.5MiB/10006msec) 00:37:14.029 slat (nsec): min=5840, max=55273, avg=13539.14, stdev=9017.08 00:37:14.029 clat (usec): min=12101, max=60729, avg=33747.30, stdev=6167.35 00:37:14.029 lat (usec): min=12107, max=60745, avg=33760.84, stdev=6166.92 00:37:14.029 clat percentiles (usec): 00:37:14.029 | 1.00th=[18744], 5.00th=[25297], 10.00th=[28967], 20.00th=[31065], 00:37:14.029 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:37:14.029 | 70.00th=[32900], 80.00th=[38011], 90.00th=[43254], 95.00th=[46400], 00:37:14.029 | 99.00th=[51643], 99.50th=[54264], 99.90th=[60556], 99.95th=[60556], 00:37:14.029 | 99.99th=[60556] 00:37:14.029 bw ( KiB/s): min= 1763, max= 1976, per=4.01%, avg=1891.32, stdev=57.28, samples=19 00:37:14.029 iops : min= 440, max= 494, avg=472.79, stdev=14.41, samples=19 00:37:14.029 lat (msec) : 20=1.56%, 50=96.89%, 100=1.54% 00:37:14.029 cpu : usr=95.91%, sys=2.11%, ctx=75, majf=0, minf=83 00:37:14.029 IO depths : 1=0.9%, 2=1.8%, 4=8.9%, 8=74.1%, 16=14.3%, 32=0.0%, >=64=0.0% 00:37:14.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.029 complete : 0=0.0%, 4=90.6%, 8=6.4%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.029 issued rwts: total=4733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.029 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.029 filename2: (groupid=0, jobs=1): err= 0: pid=2869420: Sun Jun 9 09:15:34 2024 00:37:14.029 read: IOPS=502, BW=2011KiB/s (2059kB/s)(19.7MiB/10025msec) 00:37:14.029 slat (nsec): min=5857, max=56685, avg=10462.45, stdev=6456.50 00:37:14.029 clat (usec): min=11475, max=35821, avg=31735.11, stdev=1741.45 00:37:14.029 lat (usec): min=11482, max=35840, avg=31745.58, stdev=1741.58 00:37:14.029 clat percentiles (usec): 00:37:14.029 | 1.00th=[22938], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:37:14.029 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:14.029 | 
70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:37:14.029 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[35914], 00:37:14.029 | 99.99th=[35914] 00:37:14.029 bw ( KiB/s): min= 1920, max= 2176, per=4.26%, avg=2009.60, stdev=73.12, samples=20 00:37:14.029 iops : min= 480, max= 544, avg=502.40, stdev=18.28, samples=20 00:37:14.029 lat (msec) : 20=0.46%, 50=99.54% 00:37:14.029 cpu : usr=99.01%, sys=0.68%, ctx=33, majf=0, minf=67 00:37:14.029 IO depths : 1=6.2%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:14.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.029 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.029 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.029 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.029 filename2: (groupid=0, jobs=1): err= 0: pid=2869421: Sun Jun 9 09:15:34 2024 00:37:14.029 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10004msec) 00:37:14.029 slat (nsec): min=5817, max=50171, avg=14739.07, stdev=7877.95 00:37:14.029 clat (usec): min=3524, max=73270, avg=31865.66, stdev=2618.79 00:37:14.029 lat (usec): min=3530, max=73292, avg=31880.39, stdev=2618.93 00:37:14.029 clat percentiles (usec): 00:37:14.029 | 1.00th=[28705], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:37:14.029 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:14.029 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:37:14.029 | 99.00th=[34341], 99.50th=[34866], 99.90th=[57410], 99.95th=[57410], 00:37:14.029 | 99.99th=[72877] 00:37:14.029 bw ( KiB/s): min= 1792, max= 2048, per=4.21%, avg=1987.32, stdev=71.59, samples=19 00:37:14.029 iops : min= 448, max= 512, avg=496.79, stdev=17.87, samples=19 00:37:14.029 lat (msec) : 4=0.28%, 10=0.04%, 20=0.32%, 50=99.04%, 100=0.32% 00:37:14.029 cpu : usr=99.14%, sys=0.59%, ctx=12, majf=0, minf=66 00:37:14.029 IO depths : 
1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.5%, 32=0.0%, >=64=0.0% 00:37:14.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.029 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.029 issued rwts: total=5006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.029 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.029 filename2: (groupid=0, jobs=1): err= 0: pid=2869422: Sun Jun 9 09:15:34 2024 00:37:14.029 read: IOPS=463, BW=1852KiB/s (1897kB/s)(18.1MiB/10004msec) 00:37:14.029 slat (nsec): min=5833, max=72345, avg=14090.36, stdev=9971.56 00:37:14.029 clat (usec): min=9059, max=64160, avg=34472.59, stdev=6766.71 00:37:14.029 lat (usec): min=9065, max=64179, avg=34486.68, stdev=6765.50 00:37:14.029 clat percentiles (usec): 00:37:14.029 | 1.00th=[18744], 5.00th=[24773], 10.00th=[30016], 20.00th=[31327], 00:37:14.029 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:37:14.029 | 70.00th=[33817], 80.00th=[40633], 90.00th=[44303], 95.00th=[47449], 00:37:14.029 | 99.00th=[53740], 99.50th=[57934], 99.90th=[64226], 99.95th=[64226], 00:37:14.029 | 99.99th=[64226] 00:37:14.029 bw ( KiB/s): min= 1536, max= 1944, per=3.90%, avg=1842.26, stdev=92.46, samples=19 00:37:14.029 iops : min= 384, max= 486, avg=460.53, stdev=23.15, samples=19 00:37:14.029 lat (msec) : 10=0.06%, 20=1.77%, 50=95.90%, 100=2.27% 00:37:14.029 cpu : usr=98.83%, sys=0.83%, ctx=52, majf=0, minf=87 00:37:14.029 IO depths : 1=1.0%, 2=2.0%, 4=9.9%, 8=73.3%, 16=13.8%, 32=0.0%, >=64=0.0% 00:37:14.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.029 complete : 0=0.0%, 4=90.7%, 8=6.0%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.029 issued rwts: total=4632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.029 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.029 00:37:14.029 Run status group 0 (all jobs): 00:37:14.029 READ: bw=46.1MiB/s (48.3MB/s), 1837KiB/s-2140KiB/s 
(1882kB/s-2191kB/s), io=464MiB (486MB), run=10002-10059msec 00:37:14.029 09:15:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:14.029 09:15:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:14.029 09:15:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:14.029 09:15:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:14.029 09:15:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:14.029 09:15:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:14.029 09:15:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:14.029 09:15:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.029 09:15:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:14.029 09:15:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:14.029 09:15:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:14.029 09:15:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.029 
09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:14.029 
09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:14.029 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.030 bdev_null0 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.030 [2024-06-09 09:15:35.062152] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.030 bdev_null1 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 
00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:37:14.030 09:15:35 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:14.030 { 00:37:14.030 "params": { 00:37:14.030 "name": "Nvme$subsystem", 00:37:14.030 "trtype": "$TEST_TRANSPORT", 00:37:14.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:14.030 "adrfam": "ipv4", 00:37:14.030 "trsvcid": "$NVMF_PORT", 00:37:14.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:14.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:14.030 "hdgst": ${hdgst:-false}, 00:37:14.030 "ddgst": ${ddgst:-false} 00:37:14.030 }, 00:37:14.030 "method": "bdev_nvme_attach_controller" 00:37:14.030 } 00:37:14.030 EOF 00:37:14.030 )") 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:14.030 09:15:35 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:14.030 { 00:37:14.030 "params": { 00:37:14.030 "name": "Nvme$subsystem", 00:37:14.030 "trtype": "$TEST_TRANSPORT", 00:37:14.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:14.030 "adrfam": "ipv4", 00:37:14.030 "trsvcid": "$NVMF_PORT", 00:37:14.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:14.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:14.030 "hdgst": ${hdgst:-false}, 00:37:14.030 "ddgst": ${ddgst:-false} 00:37:14.030 }, 00:37:14.030 "method": "bdev_nvme_attach_controller" 00:37:14.030 } 00:37:14.030 EOF 00:37:14.030 )") 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:14.030 "params": { 00:37:14.030 "name": "Nvme0", 00:37:14.030 "trtype": "tcp", 00:37:14.030 "traddr": "10.0.0.2", 00:37:14.030 "adrfam": "ipv4", 00:37:14.030 "trsvcid": "4420", 00:37:14.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:14.030 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:14.030 "hdgst": false, 00:37:14.030 "ddgst": false 00:37:14.030 }, 00:37:14.030 "method": "bdev_nvme_attach_controller" 00:37:14.030 },{ 00:37:14.030 "params": { 00:37:14.030 "name": "Nvme1", 00:37:14.030 "trtype": "tcp", 00:37:14.030 "traddr": "10.0.0.2", 00:37:14.030 "adrfam": "ipv4", 00:37:14.030 "trsvcid": "4420", 00:37:14.030 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:14.030 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:14.030 "hdgst": false, 00:37:14.030 "ddgst": false 00:37:14.030 }, 00:37:14.030 "method": "bdev_nvme_attach_controller" 00:37:14.030 }' 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:14.030 09:15:35 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:14.030 09:15:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:14.030 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:14.030 ... 00:37:14.030 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:14.030 ... 00:37:14.030 fio-3.35 00:37:14.030 Starting 4 threads 00:37:14.030 EAL: No free 2048 kB hugepages reported on node 1 00:37:19.322 00:37:19.322 filename0: (groupid=0, jobs=1): err= 0: pid=2871637: Sun Jun 9 09:15:41 2024 00:37:19.322 read: IOPS=1991, BW=15.6MiB/s (16.3MB/s)(77.8MiB/5003msec) 00:37:19.322 slat (nsec): min=5660, max=50455, avg=8492.35, stdev=3438.42 00:37:19.322 clat (usec): min=1824, max=7559, avg=3993.75, stdev=603.75 00:37:19.322 lat (usec): min=1830, max=7585, avg=4002.24, stdev=604.40 00:37:19.322 clat percentiles (usec): 00:37:19.322 | 1.00th=[ 2769], 5.00th=[ 3163], 10.00th=[ 3359], 20.00th=[ 3589], 00:37:19.322 | 30.00th=[ 3752], 40.00th=[ 3785], 50.00th=[ 3851], 60.00th=[ 3982], 00:37:19.322 | 70.00th=[ 4146], 80.00th=[ 4359], 90.00th=[ 4817], 95.00th=[ 5211], 00:37:19.322 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 7177], 99.95th=[ 7177], 00:37:19.322 | 99.99th=[ 7570] 00:37:19.322 bw ( KiB/s): min=14816, max=16896, per=24.23%, avg=15931.20, stdev=788.51, samples=10 00:37:19.322 iops : min= 1852, max= 2112, avg=1991.40, stdev=98.56, samples=10 00:37:19.322 lat (msec) : 2=0.05%, 4=61.19%, 10=38.76% 00:37:19.322 cpu : usr=96.64%, sys=3.02%, ctx=12, majf=0, minf=10 00:37:19.322 IO depths : 1=0.1%, 2=0.8%, 4=70.3%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:19.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:37:19.322 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.322 issued rwts: total=9962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:19.322 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:19.322 filename0: (groupid=0, jobs=1): err= 0: pid=2871638: Sun Jun 9 09:15:41 2024 00:37:19.322 read: IOPS=2003, BW=15.6MiB/s (16.4MB/s)(78.3MiB/5003msec) 00:37:19.322 slat (nsec): min=8253, max=63526, avg=9210.74, stdev=2870.44 00:37:19.322 clat (usec): min=1612, max=7686, avg=3968.55, stdev=624.62 00:37:19.322 lat (usec): min=1647, max=7695, avg=3977.76, stdev=624.56 00:37:19.322 clat percentiles (usec): 00:37:19.322 | 1.00th=[ 2638], 5.00th=[ 3064], 10.00th=[ 3294], 20.00th=[ 3556], 00:37:19.322 | 30.00th=[ 3720], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 3949], 00:37:19.322 | 70.00th=[ 4146], 80.00th=[ 4424], 90.00th=[ 4817], 95.00th=[ 5211], 00:37:19.322 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6456], 99.95th=[ 6652], 00:37:19.322 | 99.99th=[ 7701] 00:37:19.322 bw ( KiB/s): min=15024, max=16976, per=24.37%, avg=16022.40, stdev=767.64, samples=10 00:37:19.322 iops : min= 1878, max= 2122, avg=2002.80, stdev=95.95, samples=10 00:37:19.322 lat (msec) : 2=0.05%, 4=63.72%, 10=36.23% 00:37:19.322 cpu : usr=95.72%, sys=3.16%, ctx=6, majf=0, minf=9 00:37:19.322 IO depths : 1=0.2%, 2=1.2%, 4=69.7%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:19.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.322 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.322 issued rwts: total=10022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:19.322 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:19.322 filename1: (groupid=0, jobs=1): err= 0: pid=2871639: Sun Jun 9 09:15:41 2024 00:37:19.322 read: IOPS=1964, BW=15.4MiB/s (16.1MB/s)(77.4MiB/5042msec) 00:37:19.322 slat (nsec): min=8249, max=39094, avg=9212.27, stdev=2489.13 00:37:19.322 clat (usec): min=1947, max=44295, 
avg=4032.14, stdev=989.34 00:37:19.322 lat (usec): min=1956, max=44304, avg=4041.35, stdev=989.31 00:37:19.322 clat percentiles (usec): 00:37:19.322 | 1.00th=[ 2802], 5.00th=[ 3163], 10.00th=[ 3392], 20.00th=[ 3654], 00:37:19.322 | 30.00th=[ 3752], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3982], 00:37:19.322 | 70.00th=[ 4178], 80.00th=[ 4424], 90.00th=[ 4883], 95.00th=[ 5276], 00:37:19.322 | 99.00th=[ 5997], 99.50th=[ 6259], 99.90th=[ 6980], 99.95th=[ 7898], 00:37:19.322 | 99.99th=[44303] 00:37:19.322 bw ( KiB/s): min=14909, max=17008, per=24.10%, avg=15847.70, stdev=813.34, samples=10 00:37:19.322 iops : min= 1863, max= 2126, avg=1980.90, stdev=101.75, samples=10 00:37:19.322 lat (msec) : 2=0.01%, 4=60.88%, 10=39.07%, 50=0.04% 00:37:19.322 cpu : usr=96.27%, sys=3.25%, ctx=154, majf=0, minf=9 00:37:19.322 IO depths : 1=0.1%, 2=1.0%, 4=66.4%, 8=32.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:19.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.322 complete : 0=0.0%, 4=95.9%, 8=4.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.322 issued rwts: total=9907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:19.322 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:19.322 filename1: (groupid=0, jobs=1): err= 0: pid=2871640: Sun Jun 9 09:15:41 2024 00:37:19.322 read: IOPS=2292, BW=17.9MiB/s (18.8MB/s)(90.3MiB/5043msec) 00:37:19.322 slat (nsec): min=5667, max=39820, avg=8602.13, stdev=2345.09 00:37:19.322 clat (usec): min=1267, max=44853, avg=3455.80, stdev=1025.36 00:37:19.322 lat (usec): min=1276, max=44859, avg=3464.41, stdev=1025.25 00:37:19.322 clat percentiles (usec): 00:37:19.322 | 1.00th=[ 2073], 5.00th=[ 2474], 10.00th=[ 2638], 20.00th=[ 2900], 00:37:19.322 | 30.00th=[ 3097], 40.00th=[ 3326], 50.00th=[ 3523], 60.00th=[ 3720], 00:37:19.322 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 4113], 95.00th=[ 4359], 00:37:19.322 | 99.00th=[ 4948], 99.50th=[ 5211], 99.90th=[ 5735], 99.95th=[ 5997], 00:37:19.322 | 99.99th=[44827] 
00:37:19.322 bw ( KiB/s): min=16624, max=20320, per=28.12%, avg=18491.80, stdev=1663.75, samples=10 00:37:19.322 iops : min= 2078, max= 2540, avg=2311.40, stdev=207.93, samples=10 00:37:19.322 lat (msec) : 2=0.81%, 4=86.87%, 10=12.28%, 50=0.04% 00:37:19.322 cpu : usr=97.14%, sys=2.56%, ctx=7, majf=0, minf=0 00:37:19.322 IO depths : 1=0.4%, 2=2.7%, 4=66.6%, 8=30.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:19.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.322 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.322 issued rwts: total=11560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:19.322 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:19.322 00:37:19.322 Run status group 0 (all jobs): 00:37:19.322 READ: bw=64.2MiB/s (67.3MB/s), 15.4MiB/s-17.9MiB/s (16.1MB/s-18.8MB/s), io=324MiB (340MB), run=5003-5043msec 00:37:19.322 09:15:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:19.322 09:15:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:19.322 09:15:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:19.322 09:15:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:19.322 09:15:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:19.322 09:15:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:19.322 09:15:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:19.322 09:15:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:19.322 09:15:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:19.322 09:15:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:19.322 09:15:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:37:19.322 09:15:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:19.322 09:15:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:19.323 09:15:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:19.323 09:15:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:19.323 09:15:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:19.323 09:15:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:19.323 09:15:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:19.323 09:15:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:19.323 09:15:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:19.323 09:15:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:19.323 09:15:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:19.323 09:15:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:19.323 09:15:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:19.323 00:37:19.323 real 0m24.156s 00:37:19.323 user 5m18.563s 00:37:19.323 sys 0m4.314s 00:37:19.323 09:15:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:19.323 09:15:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:19.323 ************************************ 00:37:19.323 END TEST fio_dif_rand_params 00:37:19.323 ************************************ 00:37:19.323 09:15:41 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:19.323 09:15:41 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:37:19.323 09:15:41 nvmf_dif -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:37:19.323 09:15:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:19.323 ************************************ 00:37:19.323 START TEST fio_dif_digest 00:37:19.323 ************************************ 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # fio_dif_digest 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # 
set +x 00:37:19.323 bdev_null0 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:19.323 [2024-06-09 09:15:41.398574] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local sanitizers 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # shift 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local asan_lib= 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:19.323 { 00:37:19.323 "params": { 00:37:19.323 "name": "Nvme$subsystem", 00:37:19.323 "trtype": "$TEST_TRANSPORT", 00:37:19.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:19.323 "adrfam": "ipv4", 00:37:19.323 "trsvcid": "$NVMF_PORT", 00:37:19.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:19.323 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:19.323 "hdgst": ${hdgst:-false}, 00:37:19.323 "ddgst": ${ddgst:-false} 00:37:19.323 }, 00:37:19.323 "method": "bdev_nvme_attach_controller" 00:37:19.323 } 00:37:19.323 EOF 00:37:19.323 )") 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libasan 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:19.323 "params": { 00:37:19.323 "name": "Nvme0", 00:37:19.323 "trtype": "tcp", 00:37:19.323 "traddr": "10.0.0.2", 00:37:19.323 "adrfam": "ipv4", 00:37:19.323 "trsvcid": "4420", 00:37:19.323 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:19.323 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:19.323 "hdgst": true, 00:37:19.323 "ddgst": true 00:37:19.323 }, 00:37:19.323 "method": "bdev_nvme_attach_controller" 00:37:19.323 }' 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:19.323 09:15:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:19.323 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:19.323 ... 00:37:19.323 fio-3.35 00:37:19.323 Starting 3 threads 00:37:19.323 EAL: No free 2048 kB hugepages reported on node 1 00:37:31.559 00:37:31.559 filename0: (groupid=0, jobs=1): err= 0: pid=2872907: Sun Jun 9 09:15:52 2024 00:37:31.559 read: IOPS=142, BW=17.8MiB/s (18.7MB/s)(179MiB/10016msec) 00:37:31.559 slat (nsec): min=6026, max=32583, avg=7832.23, stdev=1639.33 00:37:31.559 clat (usec): min=7202, max=98570, avg=21011.85, stdev=17249.48 00:37:31.559 lat (usec): min=7208, max=98577, avg=21019.69, stdev=17249.54 00:37:31.559 clat percentiles (usec): 00:37:31.559 | 1.00th=[ 8094], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10945], 00:37:31.559 | 30.00th=[12125], 40.00th=[13042], 50.00th=[13829], 60.00th=[14877], 00:37:31.559 | 70.00th=[16057], 80.00th=[17957], 90.00th=[55313], 95.00th=[56886], 00:37:31.559 | 99.00th=[58983], 99.50th=[60031], 99.90th=[98042], 99.95th=[99091], 00:37:31.559 | 99.99th=[99091] 00:37:31.559 bw ( KiB/s): min=12032, max=25600, per=30.80%, avg=18252.80, stdev=4284.58, samples=20 00:37:31.559 iops : min= 94, 
max= 200, avg=142.60, stdev=33.47, samples=20 00:37:31.559 lat (msec) : 10=11.97%, 20=69.84%, 50=0.07%, 100=18.12% 00:37:31.559 cpu : usr=96.52%, sys=3.21%, ctx=34, majf=0, minf=210 00:37:31.559 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:31.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.559 issued rwts: total=1429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:31.559 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:31.559 filename0: (groupid=0, jobs=1): err= 0: pid=2872908: Sun Jun 9 09:15:52 2024 00:37:31.559 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(247MiB/10053msec) 00:37:31.559 slat (nsec): min=6024, max=32118, avg=6649.22, stdev=901.19 00:37:31.559 clat (usec): min=5680, max=95481, avg=15244.74, stdev=13407.18 00:37:31.559 lat (usec): min=5687, max=95488, avg=15251.39, stdev=13407.18 00:37:31.559 clat percentiles (usec): 00:37:31.559 | 1.00th=[ 6652], 5.00th=[ 7373], 10.00th=[ 8029], 20.00th=[ 9110], 00:37:31.559 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[11469], 60.00th=[12256], 00:37:31.559 | 70.00th=[12911], 80.00th=[13960], 90.00th=[16450], 95.00th=[54264], 00:37:31.559 | 99.00th=[56361], 99.50th=[56886], 99.90th=[94897], 99.95th=[95945], 00:37:31.559 | 99.99th=[95945] 00:37:31.559 bw ( KiB/s): min=16128, max=34048, per=42.64%, avg=25267.20, stdev=5183.67, samples=20 00:37:31.559 iops : min= 126, max= 266, avg=197.40, stdev=40.50, samples=20 00:37:31.559 lat (msec) : 10=32.42%, 20=58.22%, 100=9.36% 00:37:31.559 cpu : usr=96.07%, sys=3.62%, ctx=14, majf=0, minf=156 00:37:31.559 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:31.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.559 issued rwts: total=1977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:37:31.559 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:31.559 filename0: (groupid=0, jobs=1): err= 0: pid=2872909: Sun Jun 9 09:15:52 2024 00:37:31.559 read: IOPS=124, BW=15.6MiB/s (16.3MB/s)(156MiB/10007msec) 00:37:31.559 slat (nsec): min=6025, max=31531, avg=8172.52, stdev=1665.44 00:37:31.559 clat (usec): min=7973, max=99516, avg=24039.10, stdev=18868.76 00:37:31.559 lat (usec): min=7979, max=99523, avg=24047.27, stdev=18868.78 00:37:31.559 clat percentiles (usec): 00:37:31.559 | 1.00th=[ 8979], 5.00th=[10552], 10.00th=[11469], 20.00th=[12649], 00:37:31.559 | 30.00th=[13566], 40.00th=[14484], 50.00th=[15270], 60.00th=[16319], 00:37:31.559 | 70.00th=[17433], 80.00th=[53216], 90.00th=[55837], 95.00th=[57934], 00:37:31.559 | 99.00th=[94897], 99.50th=[96994], 99.90th=[98042], 99.95th=[99091], 00:37:31.559 | 99.99th=[99091] 00:37:31.559 bw ( KiB/s): min=11520, max=20480, per=26.92%, avg=15950.40, stdev=2423.22, samples=20 00:37:31.559 iops : min= 90, max= 160, avg=124.60, stdev=18.93, samples=20 00:37:31.559 lat (msec) : 10=3.53%, 20=73.56%, 50=0.64%, 100=22.28% 00:37:31.559 cpu : usr=96.94%, sys=2.79%, ctx=18, majf=0, minf=141 00:37:31.559 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:31.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.559 issued rwts: total=1248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:31.559 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:31.559 00:37:31.559 Run status group 0 (all jobs): 00:37:31.559 READ: bw=57.9MiB/s (60.7MB/s), 15.6MiB/s-24.6MiB/s (16.3MB/s-25.8MB/s), io=582MiB (610MB), run=10007-10053msec 00:37:31.559 09:15:52 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:31.559 09:15:52 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:31.559 09:15:52 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for 
sub in "$@" 00:37:31.559 09:15:52 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:31.559 09:15:52 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:31.559 09:15:52 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:31.559 09:15:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:31.559 09:15:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:31.559 09:15:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:31.559 09:15:52 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:31.559 09:15:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:31.559 09:15:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:31.559 09:15:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:31.559 00:37:31.559 real 0m11.035s 00:37:31.559 user 0m44.044s 00:37:31.559 sys 0m1.303s 00:37:31.559 09:15:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:31.559 09:15:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:31.559 ************************************ 00:37:31.559 END TEST fio_dif_digest 00:37:31.559 ************************************ 00:37:31.559 09:15:52 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:31.559 09:15:52 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:31.559 09:15:52 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:31.559 09:15:52 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:37:31.559 09:15:52 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:31.559 09:15:52 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:37:31.559 09:15:52 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:31.559 09:15:52 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:37:31.559 rmmod nvme_tcp 00:37:31.559 rmmod nvme_fabrics 00:37:31.559 rmmod nvme_keyring 00:37:31.559 09:15:52 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:31.559 09:15:52 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:37:31.559 09:15:52 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:37:31.559 09:15:52 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2862122 ']' 00:37:31.559 09:15:52 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2862122 00:37:31.559 09:15:52 nvmf_dif -- common/autotest_common.sh@949 -- # '[' -z 2862122 ']' 00:37:31.559 09:15:52 nvmf_dif -- common/autotest_common.sh@953 -- # kill -0 2862122 00:37:31.559 09:15:52 nvmf_dif -- common/autotest_common.sh@954 -- # uname 00:37:31.559 09:15:52 nvmf_dif -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:31.559 09:15:52 nvmf_dif -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2862122 00:37:31.560 09:15:52 nvmf_dif -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:37:31.560 09:15:52 nvmf_dif -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:37:31.560 09:15:52 nvmf_dif -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2862122' 00:37:31.560 killing process with pid 2862122 00:37:31.560 09:15:52 nvmf_dif -- common/autotest_common.sh@968 -- # kill 2862122 00:37:31.560 09:15:52 nvmf_dif -- common/autotest_common.sh@973 -- # wait 2862122 00:37:31.560 09:15:52 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:31.560 09:15:52 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:33.475 Waiting for block devices as requested 00:37:33.475 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:33.475 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:33.735 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:33.736 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:33.736 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:33.736 
0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:33.997 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:33.998 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:33.998 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:34.269 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:34.269 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:34.578 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:34.578 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:34.578 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:34.578 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:34.839 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:34.839 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:35.100 09:15:57 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:35.100 09:15:57 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:35.100 09:15:57 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:35.100 09:15:57 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:35.100 09:15:57 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:35.100 09:15:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:35.100 09:15:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:37.648 09:15:59 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:37.648 00:37:37.648 real 1m16.548s 00:37:37.648 user 8m0.728s 00:37:37.648 sys 0m19.118s 00:37:37.648 09:15:59 nvmf_dif -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:37.648 09:15:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:37.648 ************************************ 00:37:37.648 END TEST nvmf_dif 00:37:37.648 ************************************ 00:37:37.648 09:15:59 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:37.648 09:15:59 -- common/autotest_common.sh@1100 -- 
# '[' 2 -le 1 ']' 00:37:37.648 09:15:59 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:37.648 09:15:59 -- common/autotest_common.sh@10 -- # set +x 00:37:37.648 ************************************ 00:37:37.648 START TEST nvmf_abort_qd_sizes 00:37:37.648 ************************************ 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:37.648 * Looking for test storage... 00:37:37.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:37:37.648 09:15:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:44.237 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:37:44.238 09:16:06 
nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:44.238 
09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:44.238 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:44.238 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:44.238 
09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:44.238 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:44.238 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- 
nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:44.238 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:44.499 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
lo up 00:37:44.499 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:44.499 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:44.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:44.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:37:44.499 00:37:44.499 --- 10.0.0.2 ping statistics --- 00:37:44.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:44.499 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:37:44.499 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:44.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:44.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.411 ms 00:37:44.499 00:37:44.499 --- 10.0.0.1 ping statistics --- 00:37:44.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:44.499 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:37:44.499 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:44.499 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:37:44.499 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:37:44.499 09:16:06 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:47.799 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:47.799 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:47.799 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:47.799 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:47.799 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:47.799 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:47.799 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:47.799 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:47.799 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:47.799 0000:00:01.7 (8086 0b00): ioatdma -> 
vfio-pci 00:37:47.799 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:47.799 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:47.799 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:47.799 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:47.799 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:47.799 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:47.799 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:48.060 09:16:10 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:48.060 09:16:10 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:48.060 09:16:10 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:48.060 09:16:10 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:48.060 09:16:10 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:48.060 09:16:10 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:48.321 09:16:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:48.321 09:16:10 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:48.321 09:16:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:48.321 09:16:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:48.321 09:16:10 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:48.321 09:16:10 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2882247 00:37:48.321 09:16:10 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2882247 00:37:48.321 09:16:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # '[' -z 2882247 ']' 00:37:48.321 09:16:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:48.321 09:16:10 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:37:48.321 09:16:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:48.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:48.321 09:16:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:48.321 09:16:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:48.321 [2024-06-09 09:16:10.708529] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:37:48.321 [2024-06-09 09:16:10.708576] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:48.321 EAL: No free 2048 kB hugepages reported on node 1 00:37:48.321 [2024-06-09 09:16:10.771867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:48.321 [2024-06-09 09:16:10.837851] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:48.321 [2024-06-09 09:16:10.837886] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:48.321 [2024-06-09 09:16:10.837893] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:48.321 [2024-06-09 09:16:10.837900] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:48.321 [2024-06-09 09:16:10.837906] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:48.321 [2024-06-09 09:16:10.838039] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:48.321 [2024-06-09 09:16:10.838158] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:37:48.321 [2024-06-09 09:16:10.838312] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:37:48.321 [2024-06-09 09:16:10.838314] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:37:49.263 09:16:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # return 0 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:49.264 09:16:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:49.264 ************************************ 00:37:49.264 START TEST spdk_target_abort 00:37:49.264 ************************************ 00:37:49.264 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # spdk_target 00:37:49.264 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:49.264 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:49.264 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:49.264 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:49.525 spdk_targetn1 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:49.525 [2024-06-09 09:16:11.877390] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:49.525 [2024-06-09 09:16:11.917667] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:49.525 09:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:49.525 EAL: No free 2048 kB hugepages reported on node 1 00:37:49.786 [2024-06-09 09:16:12.160373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:384 len:8 PRP1 0x2000078be000 PRP2 0x0 00:37:49.786 [2024-06-09 09:16:12.160399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0031 p:1 m:0 dnr:0 00:37:49.786 [2024-06-09 09:16:12.169315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:608 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:37:49.786 [2024-06-09 09:16:12.169334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:004d p:1 m:0 dnr:0 00:37:49.786 [2024-06-09 09:16:12.252867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2456 len:8 
PRP1 0x2000078c2000 PRP2 0x0 00:37:49.786 [2024-06-09 09:16:12.252885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:49.786 [2024-06-09 09:16:12.279519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3208 len:8 PRP1 0x2000078be000 PRP2 0x0 00:37:49.786 [2024-06-09 09:16:12.279535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0092 p:0 m:0 dnr:0 00:37:49.786 [2024-06-09 09:16:12.287925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3384 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:37:49.786 [2024-06-09 09:16:12.287939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00a8 p:0 m:0 dnr:0 00:37:49.786 [2024-06-09 09:16:12.302421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3744 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:37:49.786 [2024-06-09 09:16:12.302436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00d5 p:0 m:0 dnr:0 00:37:49.786 [2024-06-09 09:16:12.303137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3768 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:37:49.786 [2024-06-09 09:16:12.303147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00d9 p:0 m:0 dnr:0 00:37:49.786 [2024-06-09 09:16:12.309791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3880 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:37:49.786 [2024-06-09 09:16:12.309804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00e7 p:0 m:0 dnr:0 00:37:53.086 Initializing NVMe Controllers 00:37:53.086 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:53.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:53.086 Initialization complete. Launching workers. 00:37:53.086 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8856, failed: 8 00:37:53.086 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 4559, failed to submit 4305 00:37:53.086 success 645, unsuccess 3914, failed 0 00:37:53.086 09:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:53.086 09:16:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:53.086 EAL: No free 2048 kB hugepages reported on node 1 00:37:53.086 [2024-06-09 09:16:15.314557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:672 len:8 PRP1 0x200007c4e000 PRP2 0x0 00:37:53.086 [2024-06-09 09:16:15.314596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:37:53.659 [2024-06-09 09:16:16.076618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:18576 len:8 PRP1 0x200007c48000 PRP2 0x0 00:37:53.659 [2024-06-09 09:16:16.076649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:37:56.205 Initializing NVMe Controllers 00:37:56.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:56.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:56.205 Initialization complete. Launching workers. 
00:37:56.205 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8699, failed: 2 00:37:56.205 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1278, failed to submit 7423 00:37:56.205 success 319, unsuccess 959, failed 0 00:37:56.205 09:16:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:56.205 09:16:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:56.205 EAL: No free 2048 kB hugepages reported on node 1 00:37:56.205 [2024-06-09 09:16:18.579480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:174 nsid:1 lba:1848 len:8 PRP1 0x2000078ca000 PRP2 0x0 00:37:56.205 [2024-06-09 09:16:18.579507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:174 cdw0:0 sqhd:0099 p:0 m:0 dnr:0 00:37:58.805 [2024-06-09 09:16:20.952456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:174 nsid:1 lba:267328 len:8 PRP1 0x2000078ca000 PRP2 0x0 00:37:58.805 [2024-06-09 09:16:20.952483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:174 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:59.065 Initializing NVMe Controllers 00:37:59.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:59.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:59.065 Initialization complete. Launching workers. 
00:37:59.065 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42103, failed: 2 00:37:59.065 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2593, failed to submit 39512 00:37:59.065 success 613, unsuccess 1980, failed 0 00:37:59.065 09:16:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:59.065 09:16:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:59.066 09:16:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.066 09:16:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:59.066 09:16:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:59.066 09:16:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:59.066 09:16:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.980 09:16:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:00.980 09:16:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2882247 00:38:00.980 09:16:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@949 -- # '[' -z 2882247 ']' 00:38:00.980 09:16:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # kill -0 2882247 00:38:00.980 09:16:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # uname 00:38:00.980 09:16:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:00.980 09:16:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2882247 00:38:00.980 09:16:23 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:00.980 09:16:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:00.980 09:16:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2882247' 00:38:00.980 killing process with pid 2882247 00:38:00.980 09:16:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # kill 2882247 00:38:00.980 09:16:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # wait 2882247 00:38:01.241 00:38:01.241 real 0m12.062s 00:38:01.241 user 0m48.102s 00:38:01.241 sys 0m2.313s 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:01.241 ************************************ 00:38:01.241 END TEST spdk_target_abort 00:38:01.241 ************************************ 00:38:01.241 09:16:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:01.241 09:16:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:38:01.241 09:16:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:01.241 09:16:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:01.241 ************************************ 00:38:01.241 START TEST kernel_target_abort 00:38:01.241 ************************************ 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # kernel_target 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- 
# ip_candidates=() 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:38:01.241 09:16:23 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:01.241 09:16:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:04.543 Waiting for block devices as requested 00:38:04.543 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:04.543 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:04.543 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:04.803 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:04.803 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:04.803 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:05.064 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:05.064 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:05.064 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:05.324 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:05.324 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:05.324 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:05.585 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:05.585 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:05.585 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:05.585 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:05.845 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local 
device=nvme0n1 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:06.106 No valid GPT data, bailing 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:38:06.106 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:38:06.107 00:38:06.107 Discovery Log Number of Records 2, Generation counter 2 00:38:06.107 =====Discovery Log Entry 0====== 00:38:06.107 trtype: tcp 00:38:06.107 adrfam: ipv4 00:38:06.107 subtype: current discovery subsystem 00:38:06.107 treq: not specified, sq flow control disable supported 00:38:06.107 portid: 1 00:38:06.107 trsvcid: 4420 00:38:06.107 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:06.107 traddr: 10.0.0.1 00:38:06.107 eflags: none 00:38:06.107 sectype: none 00:38:06.107 =====Discovery Log Entry 1====== 00:38:06.107 trtype: tcp 00:38:06.107 adrfam: ipv4 00:38:06.107 subtype: nvme subsystem 00:38:06.107 treq: not specified, sq flow control disable supported 00:38:06.107 portid: 1 00:38:06.107 trsvcid: 4420 00:38:06.107 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:06.107 traddr: 10.0.0.1 00:38:06.107 eflags: none 00:38:06.107 sectype: none 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:06.107 09:16:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:06.367 EAL: No free 2048 kB hugepages reported on node 1 00:38:09.670 Initializing NVMe Controllers 00:38:09.670 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:09.670 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:09.670 Initialization complete. Launching workers. 
00:38:09.670 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37180, failed: 0 00:38:09.670 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37180, failed to submit 0 00:38:09.670 success 0, unsuccess 37180, failed 0 00:38:09.670 09:16:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:09.670 09:16:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:09.670 EAL: No free 2048 kB hugepages reported on node 1 00:38:12.971 Initializing NVMe Controllers 00:38:12.971 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:12.971 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:12.971 Initialization complete. Launching workers. 
00:38:12.971 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76291, failed: 0 00:38:12.971 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19206, failed to submit 57085 00:38:12.971 success 0, unsuccess 19206, failed 0 00:38:12.971 09:16:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:12.971 09:16:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:12.971 EAL: No free 2048 kB hugepages reported on node 1 00:38:15.516 Initializing NVMe Controllers 00:38:15.516 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:15.516 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:15.516 Initialization complete. Launching workers. 
00:38:15.516 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 73905, failed: 0 00:38:15.516 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18466, failed to submit 55439 00:38:15.516 success 0, unsuccess 18466, failed 0 00:38:15.516 09:16:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:15.516 09:16:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:15.516 09:16:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:38:15.516 09:16:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:15.516 09:16:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:15.516 09:16:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:15.516 09:16:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:15.516 09:16:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:38:15.516 09:16:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:38:15.516 09:16:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:18.815 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:18.815 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:18.815 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:18.815 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:18.815 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:18.815 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:18.815 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:18.815 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:18.815 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:18.815 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:18.815 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:18.815 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:18.815 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:18.815 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:18.815 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:18.815 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:20.765 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:21.031 00:38:21.031 real 0m19.658s 00:38:21.031 user 0m6.862s 00:38:21.031 sys 0m6.454s 00:38:21.031 09:16:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:21.031 09:16:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:21.031 ************************************ 00:38:21.031 END TEST kernel_target_abort 00:38:21.031 ************************************ 00:38:21.031 09:16:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:21.031 09:16:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:21.031 09:16:43 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:21.031 09:16:43 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:38:21.031 09:16:43 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:21.031 09:16:43 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:38:21.031 09:16:43 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:21.031 09:16:43 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:21.031 rmmod nvme_tcp 00:38:21.031 rmmod nvme_fabrics 00:38:21.031 rmmod nvme_keyring 00:38:21.031 09:16:43 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:38:21.031 09:16:43 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:38:21.031 09:16:43 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:38:21.031 09:16:43 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2882247 ']' 00:38:21.031 09:16:43 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2882247 00:38:21.031 09:16:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@949 -- # '[' -z 2882247 ']' 00:38:21.031 09:16:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # kill -0 2882247 00:38:21.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2882247) - No such process 00:38:21.031 09:16:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@976 -- # echo 'Process with pid 2882247 is not found' 00:38:21.031 Process with pid 2882247 is not found 00:38:21.031 09:16:43 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:38:21.031 09:16:43 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:24.334 Waiting for block devices as requested 00:38:24.334 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:24.334 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:24.595 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:24.595 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:24.595 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:24.857 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:24.857 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:24.857 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:24.857 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:25.119 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:25.119 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:25.380 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:25.380 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:25.380 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:25.380 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:25.640 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:25.640 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:25.900 09:16:48 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:25.900 09:16:48 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:25.900 09:16:48 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:25.900 09:16:48 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:25.900 09:16:48 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.900 09:16:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:25.900 09:16:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.813 09:16:50 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:27.813 00:38:27.813 real 0m50.681s 00:38:27.813 user 1m0.216s 00:38:27.813 sys 0m19.160s 00:38:27.813 09:16:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:27.813 09:16:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:27.813 ************************************ 00:38:27.813 END TEST nvmf_abort_qd_sizes 00:38:27.813 ************************************ 00:38:28.074 09:16:50 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:28.074 09:16:50 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:38:28.074 09:16:50 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:28.074 09:16:50 -- common/autotest_common.sh@10 -- # set +x 00:38:28.074 ************************************ 00:38:28.074 START TEST keyring_file 00:38:28.074 ************************************ 00:38:28.074 09:16:50 keyring_file -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:28.074 * Looking for test storage... 00:38:28.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:28.074 09:16:50 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:28.074 09:16:50 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:28.074 09:16:50 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:28.074 09:16:50 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:28.074 09:16:50 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:28.074 09:16:50 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.074 09:16:50 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.074 09:16:50 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.074 09:16:50 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:28.074 09:16:50 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@47 -- # : 0 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:28.074 09:16:50 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:28.074 09:16:50 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:28.074 09:16:50 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:28.074 09:16:50 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:28.074 09:16:50 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:28.074 09:16:50 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:28.074 09:16:50 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:28.074 09:16:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:28.074 09:16:50 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:28.074 09:16:50 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:28.074 09:16:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:28.074 09:16:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:28.074 09:16:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QkEZJJrohe 00:38:28.074 09:16:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:28.074 09:16:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QkEZJJrohe 00:38:28.074 09:16:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QkEZJJrohe 00:38:28.074 09:16:50 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.QkEZJJrohe 00:38:28.074 09:16:50 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:28.074 09:16:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:28.074 09:16:50 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:28.074 09:16:50 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:28.074 09:16:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:28.074 09:16:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:28.074 09:16:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.laAR0DqHlR 00:38:28.074 09:16:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:28.074 09:16:50 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:28.074 09:16:50 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:28.075 09:16:50 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:38:28.075 09:16:50 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:28.075 09:16:50 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:28.335 09:16:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.laAR0DqHlR 00:38:28.335 09:16:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.laAR0DqHlR 00:38:28.335 09:16:50 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.laAR0DqHlR 00:38:28.335 09:16:50 keyring_file -- keyring/file.sh@30 -- # tgtpid=2892230 00:38:28.335 09:16:50 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2892230 00:38:28.335 09:16:50 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:28.335 09:16:50 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 2892230 ']' 00:38:28.335 09:16:50 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:28.335 09:16:50 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:28.335 09:16:50 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:28.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:28.335 09:16:50 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:28.335 09:16:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:28.335 [2024-06-09 09:16:50.722969] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:38:28.335 [2024-06-09 09:16:50.723023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892230 ] 00:38:28.335 EAL: No free 2048 kB hugepages reported on node 1 00:38:28.335 [2024-06-09 09:16:50.780492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:28.335 [2024-06-09 09:16:50.844715] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:29.275 09:16:51 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:29.275 09:16:51 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:38:29.275 09:16:51 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:29.275 09:16:51 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.275 09:16:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:29.275 [2024-06-09 09:16:51.479873] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:29.275 null0 00:38:29.275 [2024-06-09 09:16:51.511916] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:29.275 [2024-06-09 09:16:51.512174] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:29.275 [2024-06-09 09:16:51.519929] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:38:29.275 09:16:51 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.275 09:16:51 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:29.275 09:16:51 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:38:29.275 09:16:51 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 
nqn.2016-06.io.spdk:cnode0 00:38:29.275 09:16:51 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:38:29.275 09:16:51 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:29.275 09:16:51 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:38:29.275 09:16:51 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:29.275 09:16:51 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:29.275 09:16:51 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.275 09:16:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:29.275 [2024-06-09 09:16:51.535969] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:29.275 request: 00:38:29.276 { 00:38:29.276 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:29.276 "secure_channel": false, 00:38:29.276 "listen_address": { 00:38:29.276 "trtype": "tcp", 00:38:29.276 "traddr": "127.0.0.1", 00:38:29.276 "trsvcid": "4420" 00:38:29.276 }, 00:38:29.276 "method": "nvmf_subsystem_add_listener", 00:38:29.276 "req_id": 1 00:38:29.276 } 00:38:29.276 Got JSON-RPC error response 00:38:29.276 response: 00:38:29.276 { 00:38:29.276 "code": -32602, 00:38:29.276 "message": "Invalid parameters" 00:38:29.276 } 00:38:29.276 09:16:51 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:38:29.276 09:16:51 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:38:29.276 09:16:51 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:29.276 09:16:51 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:29.276 09:16:51 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:29.276 09:16:51 keyring_file -- keyring/file.sh@46 -- # bperfpid=2892447 00:38:29.276 09:16:51 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2892447 /var/tmp/bperf.sock 
00:38:29.276 09:16:51 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:29.276 09:16:51 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 2892447 ']' 00:38:29.276 09:16:51 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:29.276 09:16:51 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:29.276 09:16:51 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:29.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:29.276 09:16:51 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:29.276 09:16:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:29.276 [2024-06-09 09:16:51.588239] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:38:29.276 [2024-06-09 09:16:51.588284] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892447 ] 00:38:29.276 EAL: No free 2048 kB hugepages reported on node 1 00:38:29.276 [2024-06-09 09:16:51.663754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:29.276 [2024-06-09 09:16:51.727902] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:29.846 09:16:52 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:29.846 09:16:52 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:38:29.846 09:16:52 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QkEZJJrohe 00:38:29.846 09:16:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QkEZJJrohe 00:38:30.106 09:16:52 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.laAR0DqHlR 00:38:30.106 09:16:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.laAR0DqHlR 00:38:30.106 09:16:52 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:38:30.106 09:16:52 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:38:30.106 09:16:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:30.106 09:16:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:30.106 09:16:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:30.367 09:16:52 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.QkEZJJrohe == \/\t\m\p\/\t\m\p\.\Q\k\E\Z\J\J\r\o\h\e ]] 00:38:30.367 
09:16:52 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:38:30.367 09:16:52 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:30.367 09:16:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:30.367 09:16:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:30.367 09:16:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:30.627 09:16:52 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.laAR0DqHlR == \/\t\m\p\/\t\m\p\.\l\a\A\R\0\D\q\H\l\R ]] 00:38:30.627 09:16:52 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:38:30.627 09:16:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:30.627 09:16:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:30.627 09:16:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:30.627 09:16:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:30.627 09:16:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:30.627 09:16:53 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:38:30.627 09:16:53 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:38:30.627 09:16:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:30.627 09:16:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:30.627 09:16:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:30.627 09:16:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:30.627 09:16:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:30.887 09:16:53 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:30.887 09:16:53 keyring_file -- 
keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:30.887 09:16:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:31.147 [2024-06-09 09:16:53.452254] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:31.147 nvme0n1 00:38:31.147 09:16:53 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:38:31.147 09:16:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:31.148 09:16:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:31.148 09:16:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:31.148 09:16:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:31.148 09:16:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:31.408 09:16:53 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:38:31.408 09:16:53 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:38:31.408 09:16:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:31.408 09:16:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:31.408 09:16:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:31.408 09:16:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:31.408 09:16:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:31.408 09:16:53 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:38:31.408 09:16:53 
keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:31.408 Running I/O for 1 seconds... 00:38:32.792 
00:38:32.792 Latency(us) 
00:38:32.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:38:32.792 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 
00:38:32.792 nvme0n1 : 1.03 2880.52 11.25 0.00 0.00 43886.75 5870.93 149422.08 
00:38:32.792 =================================================================================================================== 
00:38:32.792 Total : 2880.52 11.25 0.00 0.00 43886.75 5870.93 149422.08 
00:38:32.792 0 
00:38:32.792 09:16:54 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:32.792 09:16:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:32.792 09:16:55 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:38:32.792 09:16:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:32.792 09:16:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:32.792 09:16:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:32.792 09:16:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:32.792 09:16:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:32.792 09:16:55 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:38:32.792 09:16:55 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:38:32.792 09:16:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:32.792 09:16:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:32.792 09:16:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:38:32.792 09:16:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:32.792 09:16:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:33.053 09:16:55 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:33.053 09:16:55 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:33.053 09:16:55 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:38:33.053 09:16:55 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:33.053 09:16:55 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:38:33.053 09:16:55 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:33.054 09:16:55 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:38:33.054 09:16:55 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:33.054 09:16:55 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:33.054 09:16:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:33.315 [2024-06-09 09:16:55.638364] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 
107: Transport endpoint is not connected 00:38:33.315 [2024-06-09 09:16:55.639000] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cab520 (107): Transport endpoint is not connected 00:38:33.315 [2024-06-09 09:16:55.639996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cab520 (9): Bad file descriptor 00:38:33.315 [2024-06-09 09:16:55.640997] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:33.315 [2024-06-09 09:16:55.641003] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:33.315 [2024-06-09 09:16:55.641008] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:38:33.315 request: 00:38:33.315 { 00:38:33.315 "name": "nvme0", 00:38:33.315 "trtype": "tcp", 00:38:33.315 "traddr": "127.0.0.1", 00:38:33.315 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:33.315 "adrfam": "ipv4", 00:38:33.315 "trsvcid": "4420", 00:38:33.315 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:33.315 "psk": "key1", 00:38:33.315 "method": "bdev_nvme_attach_controller", 00:38:33.315 "req_id": 1 00:38:33.315 } 00:38:33.315 Got JSON-RPC error response 00:38:33.315 response: 00:38:33.315 { 00:38:33.315 "code": -5, 00:38:33.315 "message": "Input/output error" 00:38:33.315 } 00:38:33.315 09:16:55 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:38:33.315 09:16:55 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:33.315 09:16:55 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:33.315 09:16:55 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:33.315 09:16:55 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:38:33.315 09:16:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:33.315 09:16:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:33.315 09:16:55 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:33.315 09:16:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:33.315 09:16:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:33.315 09:16:55 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:38:33.315 09:16:55 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:38:33.315 09:16:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:33.315 09:16:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:33.315 09:16:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:33.315 09:16:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:33.315 09:16:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:33.577 09:16:55 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:33.577 09:16:55 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:38:33.577 09:16:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:33.838 09:16:56 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:38:33.838 09:16:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:33.838 09:16:56 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:38:33.838 09:16:56 keyring_file -- keyring/file.sh@77 -- # jq length 00:38:33.838 09:16:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:34.100 09:16:56 keyring_file -- 
keyring/file.sh@77 -- # (( 0 == 0 )) 00:38:34.100 09:16:56 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.QkEZJJrohe 00:38:34.100 09:16:56 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.QkEZJJrohe 00:38:34.100 09:16:56 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:38:34.100 09:16:56 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.QkEZJJrohe 00:38:34.100 09:16:56 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:38:34.100 09:16:56 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:34.100 09:16:56 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:38:34.100 09:16:56 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:34.100 09:16:56 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QkEZJJrohe 00:38:34.100 09:16:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QkEZJJrohe 00:38:34.100 [2024-06-09 09:16:56.593689] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.QkEZJJrohe': 0100660 00:38:34.100 [2024-06-09 09:16:56.593706] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:34.100 request: 00:38:34.100 { 00:38:34.100 "name": "key0", 00:38:34.100 "path": "/tmp/tmp.QkEZJJrohe", 00:38:34.100 "method": "keyring_file_add_key", 00:38:34.100 "req_id": 1 00:38:34.100 } 00:38:34.100 Got JSON-RPC error response 00:38:34.100 response: 00:38:34.100 { 00:38:34.100 "code": -1, 00:38:34.100 "message": "Operation not permitted" 00:38:34.100 } 00:38:34.100 09:16:56 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:38:34.100 09:16:56 keyring_file -- common/autotest_common.sh@660 -- # (( es > 
128 )) 00:38:34.100 09:16:56 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:34.100 09:16:56 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:34.100 09:16:56 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.QkEZJJrohe 00:38:34.100 09:16:56 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QkEZJJrohe 00:38:34.100 09:16:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QkEZJJrohe 00:38:34.360 09:16:56 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.QkEZJJrohe 00:38:34.360 09:16:56 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:38:34.360 09:16:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:34.360 09:16:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:34.360 09:16:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:34.360 09:16:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:34.360 09:16:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:34.621 09:16:56 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:38:34.621 09:16:56 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:34.621 09:16:56 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:38:34.621 09:16:56 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:34.621 09:16:56 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:38:34.621 
09:16:56 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:34.621 09:16:56 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:38:34.621 09:16:56 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:34.621 09:16:56 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:34.621 09:16:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:34.621 [2024-06-09 09:16:57.090938] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.QkEZJJrohe': No such file or directory 00:38:34.621 [2024-06-09 09:16:57.090956] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:34.621 [2024-06-09 09:16:57.090972] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:34.621 [2024-06-09 09:16:57.090976] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:34.621 [2024-06-09 09:16:57.090981] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:34.621 request: 00:38:34.621 { 00:38:34.621 "name": "nvme0", 00:38:34.621 "trtype": "tcp", 00:38:34.621 "traddr": "127.0.0.1", 00:38:34.621 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:34.621 "adrfam": "ipv4", 00:38:34.621 "trsvcid": "4420", 00:38:34.621 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:34.621 "psk": "key0", 00:38:34.621 "method": "bdev_nvme_attach_controller", 00:38:34.621 "req_id": 1 00:38:34.621 } 00:38:34.621 Got JSON-RPC error response 00:38:34.621 response: 
00:38:34.621 { 00:38:34.621 "code": -19, 00:38:34.621 "message": "No such device" 00:38:34.621 } 00:38:34.621 09:16:57 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:38:34.621 09:16:57 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:34.621 09:16:57 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:34.621 09:16:57 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:34.621 09:16:57 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:38:34.621 09:16:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:34.882 09:16:57 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:34.882 09:16:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:34.882 09:16:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:34.882 09:16:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:34.882 09:16:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:34.882 09:16:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:34.882 09:16:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.j2bWCObPgh 00:38:34.882 09:16:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:34.882 09:16:57 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:34.882 09:16:57 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:34.882 09:16:57 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:34.882 09:16:57 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:38:34.882 09:16:57 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:34.882 09:16:57 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:34.882 
09:16:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.j2bWCObPgh 00:38:34.882 09:16:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.j2bWCObPgh 00:38:34.882 09:16:57 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.j2bWCObPgh 00:38:34.882 09:16:57 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.j2bWCObPgh 00:38:34.882 09:16:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.j2bWCObPgh 00:38:35.144 09:16:57 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:35.144 09:16:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:35.144 nvme0n1 00:38:35.404 09:16:57 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:38:35.404 09:16:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:35.404 09:16:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:35.405 09:16:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:35.405 09:16:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:35.405 09:16:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:35.405 09:16:57 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:38:35.405 09:16:57 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:38:35.405 09:16:57 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:35.666 09:16:58 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:38:35.666 09:16:58 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:38:35.666 09:16:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:35.666 09:16:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:35.666 09:16:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:35.666 09:16:58 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:38:35.666 09:16:58 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:38:35.666 09:16:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:35.666 09:16:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:35.666 09:16:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:35.666 09:16:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:35.666 09:16:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:35.927 09:16:58 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:38:35.927 09:16:58 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:35.927 09:16:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:36.189 09:16:58 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:38:36.189 09:16:58 keyring_file -- keyring/file.sh@104 -- # jq length 00:38:36.189 09:16:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:38:36.189 09:16:58 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:38:36.189 09:16:58 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.j2bWCObPgh 00:38:36.189 09:16:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.j2bWCObPgh 00:38:36.450 09:16:58 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.laAR0DqHlR 00:38:36.450 09:16:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.laAR0DqHlR 00:38:36.450 09:16:59 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:36.450 09:16:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:36.711 nvme0n1 00:38:36.711 09:16:59 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:38:36.711 09:16:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:36.972 09:16:59 keyring_file -- keyring/file.sh@112 -- # config='{ 00:38:36.972 "subsystems": [ 00:38:36.972 { 00:38:36.972 "subsystem": "keyring", 00:38:36.972 "config": [ 00:38:36.972 { 00:38:36.972 "method": "keyring_file_add_key", 00:38:36.972 "params": { 00:38:36.972 "name": "key0", 00:38:36.972 "path": "/tmp/tmp.j2bWCObPgh" 00:38:36.972 } 00:38:36.972 }, 00:38:36.972 { 00:38:36.973 "method": "keyring_file_add_key", 00:38:36.973 "params": { 00:38:36.973 "name": "key1", 
00:38:36.973 "path": "/tmp/tmp.laAR0DqHlR" 00:38:36.973 } 00:38:36.973 } 00:38:36.973 ] 00:38:36.973 }, 00:38:36.973 { 00:38:36.973 "subsystem": "iobuf", 00:38:36.973 "config": [ 00:38:36.973 { 00:38:36.973 "method": "iobuf_set_options", 00:38:36.973 "params": { 00:38:36.973 "small_pool_count": 8192, 00:38:36.973 "large_pool_count": 1024, 00:38:36.973 "small_bufsize": 8192, 00:38:36.973 "large_bufsize": 135168 00:38:36.973 } 00:38:36.973 } 00:38:36.973 ] 00:38:36.973 }, 00:38:36.973 { 00:38:36.973 "subsystem": "sock", 00:38:36.973 "config": [ 00:38:36.973 { 00:38:36.973 "method": "sock_set_default_impl", 00:38:36.973 "params": { 00:38:36.973 "impl_name": "posix" 00:38:36.973 } 00:38:36.973 }, 00:38:36.973 { 00:38:36.973 "method": "sock_impl_set_options", 00:38:36.973 "params": { 00:38:36.973 "impl_name": "ssl", 00:38:36.973 "recv_buf_size": 4096, 00:38:36.973 "send_buf_size": 4096, 00:38:36.973 "enable_recv_pipe": true, 00:38:36.973 "enable_quickack": false, 00:38:36.973 "enable_placement_id": 0, 00:38:36.973 "enable_zerocopy_send_server": true, 00:38:36.973 "enable_zerocopy_send_client": false, 00:38:36.973 "zerocopy_threshold": 0, 00:38:36.973 "tls_version": 0, 00:38:36.973 "enable_ktls": false 00:38:36.973 } 00:38:36.973 }, 00:38:36.973 { 00:38:36.973 "method": "sock_impl_set_options", 00:38:36.973 "params": { 00:38:36.973 "impl_name": "posix", 00:38:36.973 "recv_buf_size": 2097152, 00:38:36.973 "send_buf_size": 2097152, 00:38:36.973 "enable_recv_pipe": true, 00:38:36.973 "enable_quickack": false, 00:38:36.973 "enable_placement_id": 0, 00:38:36.973 "enable_zerocopy_send_server": true, 00:38:36.973 "enable_zerocopy_send_client": false, 00:38:36.973 "zerocopy_threshold": 0, 00:38:36.973 "tls_version": 0, 00:38:36.973 "enable_ktls": false 00:38:36.973 } 00:38:36.973 } 00:38:36.973 ] 00:38:36.973 }, 00:38:36.973 { 00:38:36.973 "subsystem": "vmd", 00:38:36.973 "config": [] 00:38:36.973 }, 00:38:36.973 { 00:38:36.973 "subsystem": "accel", 00:38:36.973 "config": [ 
00:38:36.973 { 00:38:36.973 "method": "accel_set_options", 00:38:36.973 "params": { 00:38:36.973 "small_cache_size": 128, 00:38:36.973 "large_cache_size": 16, 00:38:36.973 "task_count": 2048, 00:38:36.973 "sequence_count": 2048, 00:38:36.973 "buf_count": 2048 00:38:36.973 } 00:38:36.973 } 00:38:36.973 ] 00:38:36.973 }, 00:38:36.973 { 00:38:36.973 "subsystem": "bdev", 00:38:36.973 "config": [ 00:38:36.973 { 00:38:36.973 "method": "bdev_set_options", 00:38:36.973 "params": { 00:38:36.973 "bdev_io_pool_size": 65535, 00:38:36.973 "bdev_io_cache_size": 256, 00:38:36.973 "bdev_auto_examine": true, 00:38:36.973 "iobuf_small_cache_size": 128, 00:38:36.973 "iobuf_large_cache_size": 16 00:38:36.973 } 00:38:36.973 }, 00:38:36.973 { 00:38:36.973 "method": "bdev_raid_set_options", 00:38:36.973 "params": { 00:38:36.973 "process_window_size_kb": 1024 00:38:36.973 } 00:38:36.973 }, 00:38:36.973 { 00:38:36.973 "method": "bdev_iscsi_set_options", 00:38:36.973 "params": { 00:38:36.973 "timeout_sec": 30 00:38:36.973 } 00:38:36.973 }, 00:38:36.973 { 00:38:36.973 "method": "bdev_nvme_set_options", 00:38:36.973 "params": { 00:38:36.973 "action_on_timeout": "none", 00:38:36.973 "timeout_us": 0, 00:38:36.973 "timeout_admin_us": 0, 00:38:36.973 "keep_alive_timeout_ms": 10000, 00:38:36.973 "arbitration_burst": 0, 00:38:36.973 "low_priority_weight": 0, 00:38:36.973 "medium_priority_weight": 0, 00:38:36.973 "high_priority_weight": 0, 00:38:36.973 "nvme_adminq_poll_period_us": 10000, 00:38:36.973 "nvme_ioq_poll_period_us": 0, 00:38:36.973 "io_queue_requests": 512, 00:38:36.973 "delay_cmd_submit": true, 00:38:36.973 "transport_retry_count": 4, 00:38:36.973 "bdev_retry_count": 3, 00:38:36.973 "transport_ack_timeout": 0, 00:38:36.973 "ctrlr_loss_timeout_sec": 0, 00:38:36.973 "reconnect_delay_sec": 0, 00:38:36.973 "fast_io_fail_timeout_sec": 0, 00:38:36.973 "disable_auto_failback": false, 00:38:36.973 "generate_uuids": false, 00:38:36.973 "transport_tos": 0, 00:38:36.973 "nvme_error_stat": false, 
00:38:36.973 "rdma_srq_size": 0, 00:38:36.973 "io_path_stat": false, 00:38:36.973 "allow_accel_sequence": false, 00:38:36.973 "rdma_max_cq_size": 0, 00:38:36.973 "rdma_cm_event_timeout_ms": 0, 00:38:36.973 "dhchap_digests": [ 00:38:36.973 "sha256", 00:38:36.973 "sha384", 00:38:36.973 "sha512" 00:38:36.973 ], 00:38:36.973 "dhchap_dhgroups": [ 00:38:36.973 "null", 00:38:36.973 "ffdhe2048", 00:38:36.973 "ffdhe3072", 00:38:36.973 "ffdhe4096", 00:38:36.973 "ffdhe6144", 00:38:36.973 "ffdhe8192" 00:38:36.973 ] 00:38:36.973 } 00:38:36.973 }, 00:38:36.973 { 00:38:36.973 "method": "bdev_nvme_attach_controller", 00:38:36.973 "params": { 00:38:36.973 "name": "nvme0", 00:38:36.973 "trtype": "TCP", 00:38:36.973 "adrfam": "IPv4", 00:38:36.973 "traddr": "127.0.0.1", 00:38:36.973 "trsvcid": "4420", 00:38:36.973 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:36.973 "prchk_reftag": false, 00:38:36.973 "prchk_guard": false, 00:38:36.973 "ctrlr_loss_timeout_sec": 0, 00:38:36.973 "reconnect_delay_sec": 0, 00:38:36.973 "fast_io_fail_timeout_sec": 0, 00:38:36.973 "psk": "key0", 00:38:36.973 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:36.973 "hdgst": false, 00:38:36.973 "ddgst": false 00:38:36.973 } 00:38:36.973 }, 00:38:36.973 { 00:38:36.973 "method": "bdev_nvme_set_hotplug", 00:38:36.973 "params": { 00:38:36.973 "period_us": 100000, 00:38:36.973 "enable": false 00:38:36.973 } 00:38:36.973 }, 00:38:36.973 { 00:38:36.973 "method": "bdev_wait_for_examine" 00:38:36.973 } 00:38:36.973 ] 00:38:36.973 }, 00:38:36.973 { 00:38:36.973 "subsystem": "nbd", 00:38:36.973 "config": [] 00:38:36.973 } 00:38:36.973 ] 00:38:36.973 }' 00:38:36.973 09:16:59 keyring_file -- keyring/file.sh@114 -- # killprocess 2892447 00:38:36.973 09:16:59 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 2892447 ']' 00:38:36.973 09:16:59 keyring_file -- common/autotest_common.sh@953 -- # kill -0 2892447 00:38:36.973 09:16:59 keyring_file -- common/autotest_common.sh@954 -- # uname 00:38:36.973 09:16:59 keyring_file 
-- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:36.973 09:16:59 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2892447 00:38:36.973 09:16:59 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:38:36.973 09:16:59 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:38:36.973 09:16:59 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2892447' 00:38:36.973 killing process with pid 2892447 00:38:36.973 09:16:59 keyring_file -- common/autotest_common.sh@968 -- # kill 2892447 00:38:36.973 Received shutdown signal, test time was about 1.000000 seconds 00:38:36.973 00:38:36.973 Latency(us) 00:38:36.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.973 =================================================================================================================== 00:38:36.973 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:36.973 09:16:59 keyring_file -- common/autotest_common.sh@973 -- # wait 2892447 00:38:37.265 09:16:59 keyring_file -- keyring/file.sh@117 -- # bperfpid=2894016 00:38:37.265 09:16:59 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2894016 /var/tmp/bperf.sock 00:38:37.265 09:16:59 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 2894016 ']' 00:38:37.265 09:16:59 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:37.265 09:16:59 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:37.265 09:16:59 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:37.265 09:16:59 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:38:37.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:37.265 09:16:59 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:37.265 09:16:59 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:38:37.265 "subsystems": [ 00:38:37.265 { 00:38:37.265 "subsystem": "keyring", 00:38:37.265 "config": [ 00:38:37.265 { 00:38:37.265 "method": "keyring_file_add_key", 00:38:37.265 "params": { 00:38:37.265 "name": "key0", 00:38:37.265 "path": "/tmp/tmp.j2bWCObPgh" 00:38:37.265 } 00:38:37.265 }, 00:38:37.265 { 00:38:37.265 "method": "keyring_file_add_key", 00:38:37.265 "params": { 00:38:37.265 "name": "key1", 00:38:37.265 "path": "/tmp/tmp.laAR0DqHlR" 00:38:37.265 } 00:38:37.265 } 00:38:37.265 ] 00:38:37.265 }, 00:38:37.265 { 00:38:37.265 "subsystem": "iobuf", 00:38:37.265 "config": [ 00:38:37.265 { 00:38:37.265 "method": "iobuf_set_options", 00:38:37.265 "params": { 00:38:37.265 "small_pool_count": 8192, 00:38:37.265 "large_pool_count": 1024, 00:38:37.265 "small_bufsize": 8192, 00:38:37.265 "large_bufsize": 135168 00:38:37.265 } 00:38:37.265 } 00:38:37.265 ] 00:38:37.265 }, 00:38:37.265 { 00:38:37.265 "subsystem": "sock", 00:38:37.265 "config": [ 00:38:37.265 { 00:38:37.265 "method": "sock_set_default_impl", 00:38:37.265 "params": { 00:38:37.265 "impl_name": "posix" 00:38:37.265 } 00:38:37.265 }, 00:38:37.265 { 00:38:37.265 "method": "sock_impl_set_options", 00:38:37.265 "params": { 00:38:37.265 "impl_name": "ssl", 00:38:37.265 "recv_buf_size": 4096, 00:38:37.265 "send_buf_size": 4096, 00:38:37.265 "enable_recv_pipe": true, 00:38:37.265 "enable_quickack": false, 00:38:37.265 "enable_placement_id": 0, 00:38:37.265 "enable_zerocopy_send_server": true, 00:38:37.265 "enable_zerocopy_send_client": false, 00:38:37.265 "zerocopy_threshold": 0, 00:38:37.265 "tls_version": 0, 00:38:37.265 "enable_ktls": false 00:38:37.265 } 00:38:37.265 }, 00:38:37.265 { 00:38:37.265 "method": "sock_impl_set_options", 00:38:37.265 
"params": { 00:38:37.265 "impl_name": "posix", 00:38:37.265 "recv_buf_size": 2097152, 00:38:37.265 "send_buf_size": 2097152, 00:38:37.265 "enable_recv_pipe": true, 00:38:37.265 "enable_quickack": false, 00:38:37.265 "enable_placement_id": 0, 00:38:37.265 "enable_zerocopy_send_server": true, 00:38:37.265 "enable_zerocopy_send_client": false, 00:38:37.265 "zerocopy_threshold": 0, 00:38:37.265 "tls_version": 0, 00:38:37.265 "enable_ktls": false 00:38:37.265 } 00:38:37.265 } 00:38:37.265 ] 00:38:37.265 }, 00:38:37.265 { 00:38:37.265 "subsystem": "vmd", 00:38:37.265 "config": [] 00:38:37.265 }, 00:38:37.265 { 00:38:37.265 "subsystem": "accel", 00:38:37.265 "config": [ 00:38:37.265 { 00:38:37.265 "method": "accel_set_options", 00:38:37.265 "params": { 00:38:37.265 "small_cache_size": 128, 00:38:37.265 "large_cache_size": 16, 00:38:37.265 "task_count": 2048, 00:38:37.265 "sequence_count": 2048, 00:38:37.265 "buf_count": 2048 00:38:37.265 } 00:38:37.265 } 00:38:37.265 ] 00:38:37.265 }, 00:38:37.265 { 00:38:37.265 "subsystem": "bdev", 00:38:37.265 "config": [ 00:38:37.265 { 00:38:37.265 "method": "bdev_set_options", 00:38:37.265 "params": { 00:38:37.265 "bdev_io_pool_size": 65535, 00:38:37.265 "bdev_io_cache_size": 256, 00:38:37.265 "bdev_auto_examine": true, 00:38:37.265 "iobuf_small_cache_size": 128, 00:38:37.265 "iobuf_large_cache_size": 16 00:38:37.265 } 00:38:37.265 }, 00:38:37.265 { 00:38:37.265 "method": "bdev_raid_set_options", 00:38:37.265 "params": { 00:38:37.265 "process_window_size_kb": 1024 00:38:37.265 } 00:38:37.265 }, 00:38:37.265 { 00:38:37.265 "method": "bdev_iscsi_set_options", 00:38:37.265 "params": { 00:38:37.265 "timeout_sec": 30 00:38:37.265 } 00:38:37.265 }, 00:38:37.265 { 00:38:37.265 "method": "bdev_nvme_set_options", 00:38:37.265 "params": { 00:38:37.265 "action_on_timeout": "none", 00:38:37.265 "timeout_us": 0, 00:38:37.265 "timeout_admin_us": 0, 00:38:37.265 "keep_alive_timeout_ms": 10000, 00:38:37.265 "arbitration_burst": 0, 00:38:37.265 
"low_priority_weight": 0, 00:38:37.265 "medium_priority_weight": 0, 00:38:37.266 "high_priority_weight": 0, 00:38:37.266 "nvme_adminq_poll_period_us": 10000, 00:38:37.266 "nvme_ioq_poll_period_us": 0, 00:38:37.266 "io_queue_requests": 512, 00:38:37.266 "delay_cmd_submit": true, 00:38:37.266 "transport_retry_count": 4, 00:38:37.266 "bdev_retry_count": 3, 00:38:37.266 "transport_ack_timeout": 0, 00:38:37.266 "ctrlr_loss_timeout_sec": 0, 00:38:37.266 "reconnect_delay_sec": 0, 00:38:37.266 "fast_io_fail_timeout_sec": 0, 00:38:37.266 "disable_auto_failback": false, 00:38:37.266 "generate_uuids": false, 00:38:37.266 "transport_tos": 0, 00:38:37.266 "nvme_error_stat": false, 00:38:37.266 "rdma_srq_size": 0, 00:38:37.266 "io_path_stat": false, 00:38:37.266 "allow_accel_sequence": false, 00:38:37.266 "rdma_max_cq_size": 0, 00:38:37.266 "rdma_cm_event_timeout_ms": 0, 00:38:37.266 "dhchap_digests": [ 00:38:37.266 "sha256", 00:38:37.266 "sha384", 00:38:37.266 "sha512" 00:38:37.266 ], 00:38:37.266 "dhchap_dhgroups": [ 00:38:37.266 "null", 00:38:37.266 "ffdhe2048", 00:38:37.266 "ffdhe3072", 00:38:37.266 "ffdhe4096", 00:38:37.266 "ffdhe6144", 00:38:37.266 "ffdhe8192" 00:38:37.266 ] 00:38:37.266 } 00:38:37.266 }, 00:38:37.266 { 00:38:37.266 "method": "bdev_nvme_attach_controller", 00:38:37.266 "params": { 00:38:37.266 "name": "nvme0", 00:38:37.266 "trtype": "TCP", 00:38:37.266 "adrfam": "IPv4", 00:38:37.266 "traddr": "127.0.0.1", 00:38:37.266 "trsvcid": "4420", 00:38:37.266 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:37.266 "prchk_reftag": false, 00:38:37.266 "prchk_guard": false, 00:38:37.266 "ctrlr_loss_timeout_sec": 0, 00:38:37.266 "reconnect_delay_sec": 0, 00:38:37.266 "fast_io_fail_timeout_sec": 0, 00:38:37.266 "psk": "key0", 00:38:37.266 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:37.266 "hdgst": false, 00:38:37.266 "ddgst": false 00:38:37.266 } 00:38:37.266 }, 00:38:37.266 { 00:38:37.266 "method": "bdev_nvme_set_hotplug", 00:38:37.266 "params": { 00:38:37.266 
"period_us": 100000, 00:38:37.266 "enable": false 00:38:37.266 } 00:38:37.266 }, 00:38:37.266 { 00:38:37.266 "method": "bdev_wait_for_examine" 00:38:37.266 } 00:38:37.266 ] 00:38:37.266 }, 00:38:37.266 { 00:38:37.266 "subsystem": "nbd", 00:38:37.266 "config": [] 00:38:37.266 } 00:38:37.266 ] 00:38:37.266 }' 00:38:37.266 09:16:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:37.266 [2024-06-09 09:16:59.664206] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:38:37.266 [2024-06-09 09:16:59.664260] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894016 ] 00:38:37.266 EAL: No free 2048 kB hugepages reported on node 1 00:38:37.266 [2024-06-09 09:16:59.739541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:37.266 [2024-06-09 09:16:59.793008] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:37.527 [2024-06-09 09:16:59.934411] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:38.099 09:17:00 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:38.099 09:17:00 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:38:38.099 09:17:00 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:38:38.099 09:17:00 keyring_file -- keyring/file.sh@120 -- # jq length 00:38:38.099 09:17:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.099 09:17:00 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:38:38.099 09:17:00 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:38:38.099 09:17:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:38.099 09:17:00 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:38:38.099 09:17:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:38.099 09:17:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:38.099 09:17:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.360 09:17:00 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:38.360 09:17:00 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:38:38.360 09:17:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:38.360 09:17:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:38.360 09:17:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:38.360 09:17:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:38.360 09:17:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.360 09:17:00 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:38:38.360 09:17:00 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:38:38.360 09:17:00 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:38:38.360 09:17:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:38.621 09:17:01 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:38:38.621 09:17:01 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:38.621 09:17:01 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.j2bWCObPgh /tmp/tmp.laAR0DqHlR 00:38:38.621 09:17:01 keyring_file -- keyring/file.sh@20 -- # killprocess 2894016 00:38:38.621 09:17:01 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 2894016 ']' 00:38:38.621 09:17:01 keyring_file -- common/autotest_common.sh@953 -- # kill -0 2894016 
00:38:38.621 09:17:01 keyring_file -- common/autotest_common.sh@954 -- # uname 00:38:38.621 09:17:01 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:38.621 09:17:01 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2894016 00:38:38.621 09:17:01 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:38:38.621 09:17:01 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:38:38.621 09:17:01 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2894016' 00:38:38.621 killing process with pid 2894016 00:38:38.621 09:17:01 keyring_file -- common/autotest_common.sh@968 -- # kill 2894016 00:38:38.621 Received shutdown signal, test time was about 1.000000 seconds 00:38:38.621 00:38:38.621 Latency(us) 00:38:38.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:38.621 =================================================================================================================== 00:38:38.621 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:38.621 09:17:01 keyring_file -- common/autotest_common.sh@973 -- # wait 2894016 00:38:38.882 09:17:01 keyring_file -- keyring/file.sh@21 -- # killprocess 2892230 00:38:38.882 09:17:01 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 2892230 ']' 00:38:38.882 09:17:01 keyring_file -- common/autotest_common.sh@953 -- # kill -0 2892230 00:38:38.882 09:17:01 keyring_file -- common/autotest_common.sh@954 -- # uname 00:38:38.882 09:17:01 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:38.882 09:17:01 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2892230 00:38:38.882 09:17:01 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:38.882 09:17:01 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:38.882 09:17:01 keyring_file -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 2892230' 00:38:38.882 killing process with pid 2892230 00:38:38.882 09:17:01 keyring_file -- common/autotest_common.sh@968 -- # kill 2892230 00:38:38.882 [2024-06-09 09:17:01.296257] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:38:38.882 09:17:01 keyring_file -- common/autotest_common.sh@973 -- # wait 2892230 00:38:39.143 00:38:39.143 real 0m11.080s 00:38:39.143 user 0m26.432s 00:38:39.143 sys 0m2.431s 00:38:39.143 09:17:01 keyring_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:39.143 09:17:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:39.143 ************************************ 00:38:39.143 END TEST keyring_file 00:38:39.143 ************************************ 00:38:39.143 09:17:01 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:38:39.143 09:17:01 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:39.143 09:17:01 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:38:39.143 09:17:01 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:39.143 09:17:01 -- common/autotest_common.sh@10 -- # set +x 00:38:39.143 ************************************ 00:38:39.143 START TEST keyring_linux 00:38:39.143 ************************************ 00:38:39.143 09:17:01 keyring_linux -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:39.143 * Looking for test storage... 
00:38:39.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:39.143 09:17:01 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:39.143 09:17:01 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:39.143 09:17:01 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:39.143 09:17:01 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:39.143 09:17:01 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:39.143 09:17:01 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:39.143 09:17:01 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:39.143 09:17:01 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:39.143 09:17:01 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:39.144 09:17:01 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:39.144 09:17:01 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:39.144 09:17:01 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:39.144 09:17:01 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:39.405 09:17:01 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:39.405 09:17:01 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:39.405 09:17:01 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:39.405 09:17:01 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:39.405 09:17:01 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.405 09:17:01 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.405 09:17:01 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.405 09:17:01 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:39.405 09:17:01 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:39.405 09:17:01 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:39.405 09:17:01 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:39.405 09:17:01 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:39.405 09:17:01 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:39.405 09:17:01 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:39.405 09:17:01 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:39.405 09:17:01 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:39.405 09:17:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:39.405 09:17:01 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:38:39.405 09:17:01 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:39.405 09:17:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:39.405 09:17:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:39.405 09:17:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@705 -- # python - 00:38:39.405 09:17:01 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:39.405 09:17:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:39.405 /tmp/:spdk-test:key0 00:38:39.405 09:17:01 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:39.405 09:17:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:39.405 09:17:01 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:39.405 09:17:01 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:39.405 09:17:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:39.405 09:17:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:39.405 09:17:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:38:39.405 09:17:01 keyring_linux -- nvmf/common.sh@705 -- # python - 00:38:39.405 09:17:01 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:39.405 09:17:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:39.405 /tmp/:spdk-test:key1 00:38:39.405 09:17:01 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:39.405 09:17:01 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2894604 00:38:39.405 09:17:01 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2894604 00:38:39.405 09:17:01 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 2894604 ']' 00:38:39.405 09:17:01 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:39.405 09:17:01 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:39.405 09:17:01 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:39.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:39.405 09:17:01 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:39.405 09:17:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:39.405 [2024-06-09 09:17:01.843998] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:38:39.405 [2024-06-09 09:17:01.844048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894604 ] 00:38:39.405 EAL: No free 2048 kB hugepages reported on node 1 00:38:39.405 [2024-06-09 09:17:01.902431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:39.667 [2024-06-09 09:17:01.967415] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:40.238 09:17:02 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:40.238 09:17:02 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:38:40.238 09:17:02 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:40.238 09:17:02 keyring_linux -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:40.238 09:17:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:40.238 [2024-06-09 09:17:02.624838] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:40.238 null0 00:38:40.238 [2024-06-09 09:17:02.656888] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:40.238 [2024-06-09 09:17:02.657260] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:40.238 09:17:02 keyring_linux -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:40.238 09:17:02 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:40.238 353979373 00:38:40.238 09:17:02 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:40.238 160550550 00:38:40.238 09:17:02 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2894700 00:38:40.238 09:17:02 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2894700 /var/tmp/bperf.sock 
00:38:40.238 09:17:02 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:40.238 09:17:02 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 2894700 ']' 00:38:40.238 09:17:02 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:40.238 09:17:02 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:40.238 09:17:02 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:40.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:40.238 09:17:02 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:40.238 09:17:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:40.238 [2024-06-09 09:17:02.729526] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:38:40.238 [2024-06-09 09:17:02.729573] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894700 ] 00:38:40.238 EAL: No free 2048 kB hugepages reported on node 1 00:38:40.499 [2024-06-09 09:17:02.801343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:40.499 [2024-06-09 09:17:02.855329] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:41.070 09:17:03 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:41.070 09:17:03 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:38:41.070 09:17:03 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:41.070 09:17:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:41.330 09:17:03 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:41.330 09:17:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:41.330 09:17:03 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:41.330 09:17:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:41.591 [2024-06-09 09:17:03.969893] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:41.591 nvme0n1 00:38:41.591 
09:17:04 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:38:41.591 09:17:04 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:41.591 09:17:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:41.591 09:17:04 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:41.591 09:17:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:41.591 09:17:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:41.851 09:17:04 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:41.851 09:17:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:41.851 09:17:04 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:41.851 09:17:04 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:41.851 09:17:04 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:41.851 09:17:04 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:41.851 09:17:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:41.851 09:17:04 keyring_linux -- keyring/linux.sh@25 -- # sn=353979373 00:38:41.851 09:17:04 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:41.851 09:17:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:41.851 09:17:04 keyring_linux -- keyring/linux.sh@26 -- # [[ 353979373 == \3\5\3\9\7\9\3\7\3 ]] 00:38:41.851 09:17:04 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 353979373 00:38:41.851 09:17:04 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:41.852 09:17:04 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:42.113 Running I/O for 1 seconds... 00:38:43.083 00:38:43.083 Latency(us) 00:38:43.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:43.083 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:43.083 nvme0n1 : 1.02 6347.79 24.80 0.00 0.00 19990.93 8956.59 29928.11 00:38:43.083 =================================================================================================================== 00:38:43.083 Total : 6347.79 24.80 0.00 0.00 19990.93 8956.59 29928.11 00:38:43.083 0 00:38:43.083 09:17:05 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:43.083 09:17:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:43.344 09:17:05 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:43.344 09:17:05 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:43.344 09:17:05 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:43.344 09:17:05 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:43.344 09:17:05 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:43.344 09:17:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:43.344 09:17:05 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:43.344 09:17:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:43.344 09:17:05 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:43.344 09:17:05 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:43.344 09:17:05 keyring_linux -- common/autotest_common.sh@649 -- # local es=0 00:38:43.344 09:17:05 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:43.344 09:17:05 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:38:43.344 09:17:05 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:43.344 09:17:05 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:38:43.344 09:17:05 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:43.344 09:17:05 keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:43.344 09:17:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:43.606 [2024-06-09 09:17:05.966048] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:43.606 [2024-06-09 09:17:05.966283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdc560 (107): Transport endpoint is not connected 00:38:43.606 [2024-06-09 09:17:05.967280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1bdc560 (9): Bad file descriptor 00:38:43.606 [2024-06-09 09:17:05.968281] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:43.606 [2024-06-09 09:17:05.968288] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:43.606 [2024-06-09 09:17:05.968293] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:38:43.606 request: 00:38:43.606 { 00:38:43.606 "name": "nvme0", 00:38:43.606 "trtype": "tcp", 00:38:43.606 "traddr": "127.0.0.1", 00:38:43.606 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:43.606 "adrfam": "ipv4", 00:38:43.606 "trsvcid": "4420", 00:38:43.606 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:43.606 "psk": ":spdk-test:key1", 00:38:43.606 "method": "bdev_nvme_attach_controller", 00:38:43.606 "req_id": 1 00:38:43.606 } 00:38:43.606 Got JSON-RPC error response 00:38:43.606 response: 00:38:43.606 { 00:38:43.606 "code": -5, 00:38:43.606 "message": "Input/output error" 00:38:43.606 } 00:38:43.606 09:17:05 keyring_linux -- common/autotest_common.sh@652 -- # es=1 00:38:43.606 09:17:05 keyring_linux -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:43.606 09:17:05 keyring_linux -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:43.606 09:17:05 keyring_linux -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:43.606 09:17:05 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:43.606 09:17:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:43.606 09:17:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:43.606 09:17:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:43.606 09:17:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:43.606 09:17:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:43.606 09:17:05 keyring_linux -- keyring/linux.sh@33 -- # sn=353979373 00:38:43.606 09:17:05 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 353979373 00:38:43.606 1 links removed 00:38:43.606 09:17:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:43.606 09:17:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:43.606 09:17:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:43.606 09:17:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:43.606 09:17:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:43.606 09:17:05 keyring_linux -- keyring/linux.sh@33 -- # sn=160550550 00:38:43.606 09:17:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 160550550 00:38:43.606 1 links removed 00:38:43.606 09:17:06 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2894700 00:38:43.606 09:17:06 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 2894700 ']' 00:38:43.606 09:17:06 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 2894700 00:38:43.606 09:17:06 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:38:43.606 09:17:06 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:43.606 09:17:06 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2894700 00:38:43.606 09:17:06 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:38:43.606 09:17:06 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:38:43.606 09:17:06 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2894700' 00:38:43.606 killing process with pid 2894700 00:38:43.606 09:17:06 keyring_linux -- common/autotest_common.sh@968 -- # kill 2894700 00:38:43.606 Received shutdown signal, test time was about 1.000000 seconds 00:38:43.606 00:38:43.606 Latency(us) 00:38:43.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:43.606 
=================================================================================================================== 00:38:43.606 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:43.606 09:17:06 keyring_linux -- common/autotest_common.sh@973 -- # wait 2894700 00:38:43.868 09:17:06 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2894604 00:38:43.868 09:17:06 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 2894604 ']' 00:38:43.868 09:17:06 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 2894604 00:38:43.868 09:17:06 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:38:43.868 09:17:06 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:43.868 09:17:06 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2894604 00:38:43.868 09:17:06 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:43.868 09:17:06 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:43.868 09:17:06 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2894604' 00:38:43.868 killing process with pid 2894604 00:38:43.868 09:17:06 keyring_linux -- common/autotest_common.sh@968 -- # kill 2894604 00:38:43.868 09:17:06 keyring_linux -- common/autotest_common.sh@973 -- # wait 2894604 00:38:44.130 00:38:44.130 real 0m4.853s 00:38:44.130 user 0m8.475s 00:38:44.130 sys 0m1.114s 00:38:44.130 09:17:06 keyring_linux -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:44.130 09:17:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:44.130 ************************************ 00:38:44.130 END TEST keyring_linux 00:38:44.130 ************************************ 00:38:44.130 09:17:06 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:38:44.130 09:17:06 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:38:44.130 09:17:06 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:38:44.130 09:17:06 -- spdk/autotest.sh@321 -- # '[' 0 -eq 
1 ']' 00:38:44.130 09:17:06 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:38:44.130 09:17:06 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:38:44.130 09:17:06 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:38:44.130 09:17:06 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:38:44.130 09:17:06 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:38:44.130 09:17:06 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:38:44.130 09:17:06 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:38:44.130 09:17:06 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:38:44.130 09:17:06 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:38:44.130 09:17:06 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:38:44.130 09:17:06 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:38:44.130 09:17:06 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:38:44.130 09:17:06 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:38:44.130 09:17:06 -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:44.130 09:17:06 -- common/autotest_common.sh@10 -- # set +x 00:38:44.130 09:17:06 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:38:44.130 09:17:06 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:38:44.130 09:17:06 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:38:44.130 09:17:06 -- common/autotest_common.sh@10 -- # set +x 00:38:52.276 INFO: APP EXITING 00:38:52.276 INFO: killing all VMs 00:38:52.276 INFO: killing vhost app 00:38:52.276 WARN: no vhost pid file found 00:38:52.276 INFO: EXIT DONE 00:38:54.191 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:38:54.191 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:38:54.191 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:38:54.191 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:38:54.191 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:38:54.191 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:38:54.191 0000:80:01.0 (8086 0b00): Already using the ioatdma 
driver 00:38:54.191 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:38:54.191 0000:65:00.0 (144d a80a): Already using the nvme driver 00:38:54.191 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:38:54.452 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:38:54.452 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:38:54.452 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:38:54.452 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:38:54.452 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:38:54.452 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:38:54.452 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:38:57.753 Cleaning 00:38:57.753 Removing: /var/run/dpdk/spdk0/config 00:38:57.753 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:57.753 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:57.753 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:57.753 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:57.753 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:57.753 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:57.753 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:57.753 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:57.753 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:57.753 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:57.753 Removing: /var/run/dpdk/spdk1/config 00:38:57.753 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:57.753 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:57.753 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:57.753 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:57.753 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:57.753 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:57.753 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:57.753 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:38:57.753 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:38:57.753 Removing: /var/run/dpdk/spdk1/hugepage_info
00:38:57.753 Removing: /var/run/dpdk/spdk1/mp_socket
00:38:57.753 Removing: /var/run/dpdk/spdk2/config
00:38:57.753 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:38:57.753 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:38:57.753 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:38:57.753 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:38:57.753 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:38:57.753 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:38:57.753 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:38:57.753 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:38:57.753 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:38:57.753 Removing: /var/run/dpdk/spdk2/hugepage_info
00:38:57.753 Removing: /var/run/dpdk/spdk3/config
00:38:57.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:38:57.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:38:57.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:38:57.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:38:57.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:38:57.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:38:57.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:38:57.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:38:57.753 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:38:57.753 Removing: /var/run/dpdk/spdk3/hugepage_info
00:38:57.753 Removing: /var/run/dpdk/spdk4/config
00:38:57.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:38:57.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:38:57.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:38:57.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:38:57.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:38:57.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:38:57.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:38:57.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:38:57.753 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:38:57.753 Removing: /var/run/dpdk/spdk4/hugepage_info
00:38:57.753 Removing: /dev/shm/bdev_svc_trace.1
00:38:57.753 Removing: /dev/shm/nvmf_trace.0
00:38:57.753 Removing: /dev/shm/spdk_tgt_trace.pid2352314
00:38:57.753 Removing: /var/run/dpdk/spdk0
00:38:57.753 Removing: /var/run/dpdk/spdk1
00:38:57.753 Removing: /var/run/dpdk/spdk2
00:38:57.753 Removing: /var/run/dpdk/spdk3
00:38:57.753 Removing: /var/run/dpdk/spdk4
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2350694
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2352314
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2352911
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2353966
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2354295
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2355378
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2355686
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2355867
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2356937
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2357461
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2357783
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2358166
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2358573
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2358960
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2359230
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2359379
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2359733
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2361019
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2364372
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2364741
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2365101
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2365138
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2365742
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2365825
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2366222
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2366529
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2366873
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2366912
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2367269
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2367288
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2367809
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2368077
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2368464
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2368838
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2368863
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2368947
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2369277
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2369629
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2369982
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2370200
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2370389
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2370725
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2371072
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2371421
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2371656
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2371846
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2372160
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2372513
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2372864
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2373166
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2373363
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2373601
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2373958
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2374314
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2374663
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2374905
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2375086
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2375279
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2379787
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2477853
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2482886
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2494569
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2500935
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2505950
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2506735
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2523807
00:38:57.754 Removing: /var/run/dpdk/spdk_pid2524289
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2529747
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2536538
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2539613
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2551753
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2562431
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2564541
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2565751
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2586592
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2591148
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2622421
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2628338
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2630340
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2632562
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2632696
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2633030
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2633248
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2633861
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2636109
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2637183
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2637561
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2640263
00:38:58.015 Removing: /var/run/dpdk/spdk_pid2640967
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2641683
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2646727
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2653261
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2659082
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2703665
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2708476
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2715939
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2717449
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2719274
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2724358
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2729086
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2738100
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2738119
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2743050
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2743185
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2743523
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2743989
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2744146
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2745398
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2747383
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2749273
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2751216
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2753216
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2755216
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2762684
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2763213
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2764841
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2766032
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2772202
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2775551
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2781718
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2788136
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2797700
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2806358
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2806360
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2829088
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2829776
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2830459
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2831140
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2832195
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2832889
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2833571
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2834264
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2839295
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2839629
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2846646
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2847023
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2849526
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2856634
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2856639
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2862495
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2864800
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2867630
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2868961
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2871373
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2872681
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2882603
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2883248
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2883771
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2886556
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2887224
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2887807
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2892230
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2892447
00:38:58.016 Removing: /var/run/dpdk/spdk_pid2894016
00:38:58.278 Removing: /var/run/dpdk/spdk_pid2894604
00:38:58.278 Removing: /var/run/dpdk/spdk_pid2894700
00:38:58.278 Clean
00:38:58.278 09:17:20 -- common/autotest_common.sh@1450 -- # return 0
00:38:58.278 09:17:20 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:38:58.278 09:17:20 -- common/autotest_common.sh@729 -- # xtrace_disable
00:38:58.278 09:17:20 -- common/autotest_common.sh@10 -- # set +x
00:38:58.278 09:17:20 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:38:58.278 09:17:20 -- common/autotest_common.sh@729 -- # xtrace_disable
00:38:58.278 09:17:20 -- common/autotest_common.sh@10 -- # set +x
00:38:58.278 09:17:20 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:58.278 09:17:20 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:38:58.278 09:17:20 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:38:58.278 09:17:20 -- spdk/autotest.sh@391 -- # hash lcov
00:38:58.278 09:17:20 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:38:58.278 09:17:20 -- spdk/autotest.sh@393 -- # hostname
00:38:58.278 09:17:20 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:38:58.560 geninfo: WARNING: invalid characters removed from testname!
00:39:25.164 09:17:43 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:25.164 09:17:46 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:26.108 09:17:48 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:27.496 09:17:50 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:30.045 09:17:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:31.431 09:17:53 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:32.817 09:17:55 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:39:32.817 09:17:55 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:39:32.817 09:17:55 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:39:32.817 09:17:55 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:39:32.817 09:17:55 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:39:32.817 09:17:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:32.817 09:17:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:32.817 09:17:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:32.817 09:17:55 -- paths/export.sh@5 -- $ export PATH
00:39:32.817 09:17:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:32.817 09:17:55 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:39:32.817 09:17:55 -- common/autobuild_common.sh@437 -- $ date +%s
00:39:32.817 09:17:55 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1717917475.XXXXXX
00:39:32.817 09:17:55 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1717917475.D7XX5i
00:39:32.817 09:17:55 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:39:32.817 09:17:55 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:39:32.817 09:17:55 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:39:32.817 09:17:55 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:39:32.817 09:17:55 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:39:32.817 09:17:55 -- common/autobuild_common.sh@453 -- $ get_config_params
00:39:32.817 09:17:55 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:39:32.817 09:17:55 -- common/autotest_common.sh@10 -- $ set +x
00:39:32.817 09:17:55 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:39:32.817 09:17:55 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:39:32.817 09:17:55 -- pm/common@17 -- $ local monitor
00:39:32.817 09:17:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:32.817 09:17:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:32.817 09:17:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:32.817 09:17:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:32.817 09:17:55 -- pm/common@21 -- $ date +%s
00:39:32.817 09:17:55 -- pm/common@25 -- $ sleep 1
00:39:32.817 09:17:55 -- pm/common@21 -- $ date +%s
00:39:32.817 09:17:55 -- pm/common@21 -- $ date +%s
00:39:32.817 09:17:55 -- pm/common@21 -- $ date +%s
00:39:32.817 09:17:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717917475
00:39:32.817 09:17:55 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717917475
00:39:32.817 09:17:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717917475
00:39:32.817 09:17:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717917475
00:39:32.817 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717917475_collect-vmstat.pm.log
00:39:32.818 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717917475_collect-cpu-load.pm.log
00:39:32.818 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717917475_collect-cpu-temp.pm.log
00:39:32.818 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717917475_collect-bmc-pm.bmc.pm.log
00:39:34.203 09:17:56 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:39:34.203 09:17:56 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144
00:39:34.203 09:17:56 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:34.203 09:17:56 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:39:34.203 09:17:56 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:39:34.203 09:17:56 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:39:34.203 09:17:56 -- spdk/autopackage.sh@19 -- $ timing_finish
00:39:34.203 09:17:56 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:34.203 09:17:56 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:39:34.203 09:17:56 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:34.203 09:17:56 -- spdk/autopackage.sh@20 -- $ exit 0
00:39:34.203 09:17:56 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:39:34.203 09:17:56 -- pm/common@29 -- $ signal_monitor_resources TERM
00:39:34.203 09:17:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:39:34.203 09:17:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:34.203 09:17:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:39:34.203 09:17:56 -- pm/common@44 -- $ pid=2907523
00:39:34.203 09:17:56 -- pm/common@50 -- $ kill -TERM 2907523
00:39:34.203 09:17:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:34.203 09:17:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:39:34.203 09:17:56 -- pm/common@44 -- $ pid=2907524
00:39:34.203 09:17:56 -- pm/common@50 -- $ kill -TERM 2907524
00:39:34.203 09:17:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:34.203 09:17:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:39:34.203 09:17:56 -- pm/common@44 -- $ pid=2907526
00:39:34.203 09:17:56 -- pm/common@50 -- $ kill -TERM 2907526
00:39:34.203 09:17:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:34.203 09:17:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:39:34.203 09:17:56 -- pm/common@44 -- $ pid=2907543
00:39:34.203 09:17:56 -- pm/common@50 -- $ sudo -E kill -TERM 2907543
00:39:34.203 + [[ -n 2232612 ]]
00:39:34.203 + sudo kill 2232612
00:39:34.214 [Pipeline] }
00:39:34.233 [Pipeline] // stage
00:39:34.238 [Pipeline] }
00:39:34.256 [Pipeline] // timeout
00:39:34.261 [Pipeline] }
00:39:34.278 [Pipeline] // catchError
00:39:34.284 [Pipeline] }
00:39:34.301 [Pipeline] // wrap
00:39:34.307 [Pipeline] }
00:39:34.323 [Pipeline] // catchError
00:39:34.333 [Pipeline] stage
00:39:34.336 [Pipeline] { (Epilogue)
00:39:34.351 [Pipeline] catchError
00:39:34.353 [Pipeline] {
00:39:34.368 [Pipeline] echo
00:39:34.369 Cleanup processes
00:39:34.375 [Pipeline] sh
00:39:34.665 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:34.665 2907629 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:39:34.665 2908071 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:34.680 [Pipeline] sh
00:39:34.968 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:34.968 ++ grep -v 'sudo pgrep'
00:39:34.968 ++ awk '{print $1}'
00:39:34.968 + sudo kill -9 2907629
00:39:34.985 [Pipeline] sh
00:39:35.317 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:47.564 [Pipeline] sh
00:39:47.848 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:47.848 Artifacts sizes are good
00:39:47.863 [Pipeline] archiveArtifacts
00:39:47.870 Archiving artifacts
00:39:48.130 [Pipeline] sh
00:39:48.420 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:39:48.437 [Pipeline] cleanWs
00:39:48.447 [WS-CLEANUP] Deleting project workspace...
00:39:48.447 [WS-CLEANUP] Deferred wipeout is used...
00:39:48.455 [WS-CLEANUP] done
00:39:48.457 [Pipeline] }
00:39:48.476 [Pipeline] // catchError
00:39:48.488 [Pipeline] sh
00:39:48.776 + logger -p user.info -t JENKINS-CI
00:39:48.785 [Pipeline] }
00:39:48.800 [Pipeline] // stage
00:39:48.806 [Pipeline] }
00:39:48.823 [Pipeline] // node
00:39:48.828 [Pipeline] End of Pipeline
00:39:48.869 Finished: SUCCESS